Genius Chords Viewer

You're having a jam session with your friends. When you look up song chords, every search result comes up with ugly, non-mobile-friendly sites.

This is a plea for Genius to help bring song chords into the 21st century.

Given their strong community of contributors, it'd be reasonable to imagine that some users would be willing to contribute chords, so others can learn how to play the songs.

Here's a prototype of this idea, along with some related diagrams.

I tried to find a toggle on their website, to conform to their current design, but I came up empty-handed. I decided to pick Google's Material Design toggle slider, as the Genius design somewhat resembles Material Design with its flat, block-oriented UI elements and bright colors. (The purple used in this post is similar to the pastel purple used on Genius's site.)

Rebuilding the Google Material Design toggle switch.

I would've liked to build an animated .gif of this animation, but this diagram will do for now. Basically, the chords (purple) ought to slide out from underneath the lyrics (grey), with "ease in" behavior for both opacity and position.

Animation overview.

In addition to the animation helping users make sense of the relationship between the chords and lyrics, whitespace separates chord/lyric pairs from one another to reduce confusion.

Block spacing.
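As a rough sketch of how those chord/lyric pairs could be represented, here's a toy Python renderer (the data model is my assumption, not Genius's) that anchors each chord name at a character offset above its lyric line:

```python
# Toy sketch: render a chord line above its lyric line, anchoring each
# chord name at a character offset within the lyric.
def render_pair(lyric, chords):
    """chords: list of (offset, name) tuples, sorted by offset."""
    chord_line = ""
    for offset, name in chords:
        chord_line += " " * (offset - len(chord_line)) + name
    return chord_line + "\n" + lyric

print(render_pair("Hello darkness my old friend",
                  [(0, "Am"), (6, "C"), (15, "G")]))
# Am    C        G
# Hello darkness my old friend
```

With a structure like this, toggling the chords on and off is just a matter of showing or hiding the chord line for each pair.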

Spotify Notification Drawer Player

In an attempt to reclaim some wasted space, I redesigned some of the UI elements in the Spotify player in the Android notification drawer. The album art is larger, unnecessary text has been removed, and the playback buttons are closer together so they're easier to reach with one hand (sorry, lefties -- maybe there could be a left-hand mode with the buttons on the left).

Hopefully this design looks a bit more balanced. Let me know what you think in the comment box below!


  • Quickly sketch idea in my journal while sitting at a stop light on my morning commute.
  • Take screenshot of the Spotify player. (again, while on my commute)
  • After work, take another screenshot so I don't have Google Maps navigating in the background.
  • Take photo of journal page.
  • Import photo into Affinity project.
  • Make crude sketch in Affinity. (Fig. 1.)
  • Build blockframe by placing transparent rectangles on the screenshot, then colorizing them. (Fig. 2.)
  • Copy the original blockframe & start prototyping the new design. (Above)
Fig. 1. Digital wireframe sketch with markups in red.


Fig. 2. Iterative blockframe design using transparent objects for placement.


Show Album Artwork

Feature request for Spotify to show album artwork in the list view, to improve usability. This would allow users to quickly scroll through a list and find specific songs by the color of the album art, instead of trying to speed-read song titles.

Related links: Twitter thread with links to the Spotify Community request, where pretty much all of the comments are bashing Spotify for not having this feature.

Song Deconstruction

When you listen to music, you can usually recognize individual instruments. The well-trained ears of musicians can identify complex aspects of musical composition: time signature, key, acoustic effects, etc. 

Music is just vibrations: squiggles on a line graph of [air pressure] over [time]. All known characteristics of music are born of the human mind's aptitude for recognizing patterns.
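To make those "squiggles on a line graph" concrete, here's a minimal Python sketch that generates air-pressure samples for a single pure tone (the 44.1 kHz sample rate and 440 Hz pitch are just conventional choices):

```python
import math

# Music as samples of air pressure over time: a 440 Hz sine wave
# (concert A), sampled at the conventional 44.1 kHz rate.
SAMPLE_RATE = 44100

def sine_wave(freq_hz, duration_s):
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
            for t in range(n)]

samples = sine_wave(440.0, 0.01)   # 10 ms of "air pressure" values
```

Every instrument, effect, and melody in a recording is ultimately buried somewhere in a list of numbers like this one.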

In the same way that computer vision is positioned to soon surpass human vision, we can develop "computer hearing" software to allow us to understand music in new ways. Specifically, an interface to visualize the instrumental building blocks of songs, to reveal the science behind the magic of music. 

Instrument Recognition

First, our analysis model must be able to identify instruments. Artificial neural networks could be trained to 'guess' which instrument made a sound, based on their training data.

Graphical representation of an instrument-recognizing probabilistic function: each sound maps to a probability per instrument (e.g. Piano, Guitar, etc.).
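As a toy illustration of that probabilistic output: a softmax turns raw network scores into per-instrument probabilities that sum to 1. The instrument names and scores below are made up for illustration, not the output of a real trained network:

```python
import math

# Toy sketch of the "% instrument probability" output: a softmax over
# raw network scores for a fixed set of instrument classes.
INSTRUMENTS = ["Piano", "Guitar", "Drums", "Violin"]

def softmax(scores):
    # Subtract the max score for numerical stability before exponentiating.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 0.5, 0.1, -1.0]          # pretend network output
probs = dict(zip(INSTRUMENTS, softmax(scores)))
```

The instrument with the highest probability becomes the network's "guess" for that sound.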

One way we could train these networks would be to let the program identify patterns on its own, isolate the sounds, and ask us humans to classify the sound with text. This could be a crowdsourced effort, similar to how Google trains some of its neural networks with its "Crowdsource" app. 

instrument_name = input() asks the user for an instrument label, so the network can learn the names of the different sounds.
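Here's a minimal sketch of that labeling step. The clip IDs and helper names are hypothetical; in an interactive session, the label would come from input() as above:

```python
# Sketch of the crowdsourced labeling step: an isolated sound clip is
# played for a human, who types an instrument name, and the label is
# stored for later use as training data.
labels = {}   # clip_id -> instrument name

def label_clip(clip_id, instrument_name):
    # Normalize the human-entered label before storing it.
    labels[clip_id] = instrument_name.strip().lower()

# In an interactive session, this could wrap input():
#   label_clip(clip_id, input("Which instrument is this? "))
label_clip("clip_0042", "Piano")
```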

Song Reconstruction

After we have a program that can recognize instruments, the next engineering hurdle will be improving the program's ability to recreate the song: a robotic musician that could pass a musical "Turing test" -- one that makes music that sounds like a human created it.

Some may decry this as a dystopian "death of creativity", ushering in a future where musicians are put out of business by robots. This doesn't have to be the case. 

The purpose of the reconstruction process is to improve the ability to interpret and replicate nuance. Once that nuance is quantified, it can be visualized for us humans to make even more complex and creative musical works. 

Instrument detection and reconstruction will output musician-friendly MIDI tracks that can be imported into professional music production software.
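As a toy sketch of what that output stage might look like (this is not a real MIDI writer; a library such as mido would handle the actual file format), detected notes could be flattened into time-ordered note-on/note-off events:

```python
# Toy sketch: turn detected notes into time-ordered note-on / note-off
# events, the shape a real MIDI export would need.
def notes_to_events(notes):
    """notes: list of (start_s, duration_s, midi_pitch) tuples."""
    events = []
    for start, duration, pitch in notes:
        events.append((start, "note_on", pitch))
        events.append((start + duration, "note_off", pitch))
    return sorted(events)

# Two overlapping notes: middle C (60) and the E above it (64).
events = notes_to_events([(0.0, 0.5, 60), (0.25, 0.5, 64)])
```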

It would probably be practical to deploy different neural networks to detect different types of patterns, since the scope of patterns ranges from high-level rhythms and melodies to low-level patterns such as specific sounds and effects.

Different networks could be deployed to identify different pattern-levels. Screenshot taken from Workflowy, graphics made in Affinity Designer.

Let's build this. Feel free to promote your GitHub project in the comment section below.

Suggested resources: 

  • TensorFlow, a Python library for building and training neural networks.
  • TensorBoard, for visualizing data from TensorFlow.
  • If you find any related open-source projects online, share them with me and I'll put them in this list.

Check out the video explanation of this idea, the outline on Workflowy, and feel free to join the discussion on Reddit and Twitter.


Vote for the best song reconstruction

If there are multiple attempts at song reconstruction and componentization, it would be cool to compare them against each other, as well as against the source song, and vote for the most accurate reconstruction.
