Concepts

Encouraging Productive Rooftops

This is an outline for a public policy that would encourage the productive use of rooftops, optimizing space utilization in urban areas.

Many uses can be classified as productive, and various indexes can be applied to prioritize usage based on the needs of the community.

Examples of productivity:

  • 🗣 Socially Productive

    • 💕 Things which encourage community bonding and socialization, such as:

    • 🛋 Lounges, 🌳 parks, 👙 pools, 🍻 bars, 🍔 restaurants, etc.

  • ⚡️ Ecologically Productive

    • ☀️ Sustainable electricity production: solar panels, wind farms, etc.

    • 🌽 Sustainable agriculture: food and vegetable gardens, hydroponic farms, etc.

Source: Note Jun 29, 2019

Here’s a picture of a building in Chicago that I noticed would have both productive and unproductive surfaces, possibly making the building break even in terms of taxes/subsidies.


Here’s a GIF of the initial sketch I drew by hand, before realizing it’d be a lot easier to make in an illustration app like Affinity. Luckily, it serves as an eye-catching thumbnail.


Please drop a comment below if you’ve got any thoughts about this idea, or hop into the conversation on Twitter!

A 3D Model for Every Building

When you search for a building, you see pictures and a map. However, if you were in a sci-fi movie, you’d probably see an interactive 3D model of the building. Google could bring us into the future.

This is a proposal for Google Maps to use their 3D imagery to build a collection of isolated buildings for use elsewhere on the web.

These isolated buildings could be used in Google search previews, embedded in Wikipedia pages, and downloaded for 3D modeling projects.

A diagram showing the isolated building (blue) being transcluded into other places on the web.

Here’s this same diagram with actual screenshots of the 3D view of One World Trade Center on Google Maps, with arrows pointing to where an isolated 3D model could be embedded.

The top section illustrates that this isolation could be executed pretty easily by extruding the footprint of every building up to its height. This works for most buildings, since a building's footprint is typically larger than its upper sections. For more complex buildings, however, the models could be isolated manually.
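To make that concrete, here’s a minimal Python sketch of the extrusion (a toy implementation with a made-up rectangular footprint, not Google’s actual pipeline):

    def extrude_footprint(footprint, height):
        """Turn a 2D building footprint into a simple 3D mesh.
        footprint: list of (x, y) corners in counter-clockwise order.
        Returns (vertices, faces), where each face indexes into vertices."""
        n = len(footprint)
        bottom = [(x, y, 0.0) for x, y in footprint]
        top = [(x, y, height) for x, y in footprint]
        vertices = bottom + top

        faces = []
        for i in range(n):
            j = (i + 1) % n
            faces.append([i, j, n + j, n + i])  # one quad wall per footprint edge
        faces.append(list(range(n, 2 * n)))     # flat roof cap

        return vertices, faces

    # Hypothetical 30 m x 20 m footprint extruded to 100 m
    verts, faces = extrude_footprint([(0, 0), (30, 0), (30, 20), (0, 20)], 100.0)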

If you know anybody who works on the Google Maps team, please send this to them!

Feedback and other ideas are welcome in the comment section below!

Detecting Bias in Search Engines

Perception is reality. For us internet dwellers, our reality is shaped by the big internet companies: Google, Facebook, Twitter. This is how many of us get our news.

The problems with social media news became a hot topic around the 2016 presidential election season, but in my opinion, a bigger problem exists. It's well known that tabloid clickbait blows up on social sites, but when you want to validate that information, you go to search engines.

This is where it gets spooky. Search engines are supposed to aid the fact-checkers of the internet, helping find the most relevant and truthful information. 

 

Site Preview in Search Results

This is a user interface mockup for a search results page that previews fully interactive mobile pages, with the aim of improving the speed at which users can access relevant information.

It's not perfect, and perhaps it addresses a need that's not there. But with a few tweaks, this could be A/B tested to validate its usefulness. Just an idea.

Here's a blockframe comparison between a typical search results page interface and the proposed interface. This is useful for weighing the pros and cons of each: for example, the left can show more results in a single viewport than the right, but the right allows people to look over each site without having to visit it individually.

Initial concept sketched in my journal.

Photo of my Affinity Artboard, which somewhat illustrates my workflow, as I work from top to bottom.


Improved Rating System

This is a concept for a rating system centered around a collapsible hierarchical list, so users can rate sub-items within categories. This enables users to add nuanced opinions without writing long reviews, which saves time for everyone: less time stressing over writing a beautiful, typo-free review, and fewer internal man-hours spent parsing poorly written reviews.

Restaurant Rating Example

Comparison between a written review and a multi-level rating system.

Not only is the image on the right easier for humans to read (given a few UI tweaks and different visualizations) but it's also easier for computers to understand. This makes data analysis much easier, since there's no need to extract nuance from written reviews. 
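To illustrate, here's what a structured review might look like as data; the categories and sub-items below are hypothetical, sketched in Python:

    review = {
        "Food": {"rating": 4, "sub_items": {"Burger": 5, "Fries": 3}},
        "Service": {"rating": 2, "sub_items": {"Wait time": 1, "Friendliness": 3}},
    }

    # Aggregate analysis requires no natural-language processing:
    service = review["Service"]["sub_items"]
    print(sum(service.values()) / len(service))  # average service score: 2.0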

This concept could be deployed effectively for nearly all rating systems. App Store app ratings, Google Maps location ratings, Amazon product ratings, LinkedIn job ratings, etc. 

Suggested applications from left to right: Google Play Store, Apple App Store, Google Maps, Apple Maps, Amazon, Netflix.


I spent some time looking for the best user interface for nested lists and hierarchies. I used the macOS style for my mockup because it's simple and clean, but if you know of any prettier interfaces, please drop me a line in the comments below! 

View this article on Medium.

Song Deconstruction

When you listen to music, you can usually recognize individual instruments. The well-trained ears of musicians can identify complex aspects of musical composition: time signature, key, acoustic effects, etc. 

Music is just vibrations. Squiggles on a line graph of [air pressure] over [time]. All known characteristics of music are born of the human mind's aptitude for recognizing patterns.

In the same way that computer vision is positioned to soon surpass human vision, we can develop "computer hearing" software to allow us to understand music in new ways. Specifically, an interface to visualize the instrumental building blocks of songs, to reveal the science behind the magic of music. 

Instrument Recognition

First, our analysis model must be able to identify instruments. Artificial neural networks could be trained to 'guess' which instrument made a sound, based on their training data.

Graphical representation of an instrument-recognizing probabilistic function, outputting a probability per instrument (e.g., piano, guitar).
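As a hypothetical sketch of that probabilistic function, here's a tiny TensorFlow/Keras model that maps a spectrogram to a probability per instrument (the architecture and shapes are placeholder assumptions, not a tested design):

    import tensorflow as tf

    NUM_INSTRUMENTS = 10               # piano, guitar, drums, ...
    SPECTROGRAM_SHAPE = (128, 128, 1)  # mel bins x time frames x 1 channel

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu",
                               input_shape=SPECTROGRAM_SHAPE),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        # softmax produces the "% instrument probability" from the diagram
        tf.keras.layers.Dense(NUM_INSTRUMENTS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])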

One way we could train these networks would be to let the program identify patterns on its own, isolate the sounds, and ask us humans to classify the sound with text. This could be a crowdsourced effort, similar to how Google trains some of its neural networks with its "Crowdsource" app. 

instrument_name = input() asks the user for an instrument label, so the network can learn the names of the different sounds.
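Here's a toy version of that labeling loop; the clip list and playback helper are stubbed placeholders:

    isolated_clips = ["clip_001.wav", "clip_002.wav"]  # hypothetical isolated sounds

    def play(clip_path):
        print(f"(playing {clip_path} ...)")  # stand-in for real audio playback

    labeled_examples = []
    for clip in isolated_clips:
        play(clip)
        instrument_name = input("What instrument is this? ")  # human supplies the label
        labeled_examples.append((clip, instrument_name))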

Song Reconstruction

After we have a program that can recognize instruments, the next engineering hurdle will be improving the program's ability to recreate the song. A robotic musician that passes a musical "Turing test" -- one that makes music that sounds like a human created it.

Some may decry this as a dystopian "death of creativity", ushering in a future where musicians are put out of business by robots. This doesn't have to be the case. 

The purpose of the reconstruction process is to improve the ability to interpret and replicate nuance. Once that nuance is quantified, it can be visualized for us humans to make even more complex and creative musical works. 

Instrument detection and reconstruction will output musician-friendly MIDI tracks that can be imported into professional music production software.
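For example, using the mido Python library, writing a detected note out as MIDI might look like this (the note values are arbitrary placeholders):

    import mido

    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)

    track.append(mido.Message("program_change", program=0, time=0))        # piano
    track.append(mido.Message("note_on", note=60, velocity=64, time=0))    # middle C
    track.append(mido.Message("note_off", note=60, velocity=64, time=480)) # one beat later

    mid.save("reconstruction.mid")  # importable into music production software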

It would probably be practical to deploy different neural networks to detect different types of patterns, since patterns range in scope from high-level rhythms and melodies to low-level details such as specific sounds and effects.

Different networks could be deployed to identify different pattern-levels. Screenshot taken from Workflowy, graphics made in Affinity Designer.

Let's build this. Feel free to promote your GitHub project in the comment section below.

Suggested resources: 

  • TensorFlow: a Python library for building and training neural networks. 
  • TensorBoard: for visualizing data from TensorFlow.
  • If you find any related open-source projects online, share them with me and I'll put them in this list. 

Check out the video explanation of this idea, the outline on Workflowy, and feel free to join the discussion on Reddit and Twitter.

 

Vote for the best song reconstruction

If there are multiple attempts at song reconstruction and componentization, it would be cool to compare them against each other, as well as against the source song, to vote for the most accurate reconstruction. 


A Crowdsourced EPA to Monitor Water Quality

Water is vital to life. The crisis in Flint, MI has caused many people to question the quality of their drinking water, as well as the government oversight of such public utilities.

The government does conduct studies to monitor drinking water quality, but few would disagree that we'd be better off if average citizens could double-check those results.

Just a map.

In short, the end-all solution to this problem is quite simply a map: crowdsourced, powered by the people and any other groups that want to contribute their data. Get advanced water-testing devices into the hands of millions of communities by reducing costs and improving ease of use, and let those communities contribute to the map.

The data produced by the amateur test kits will certainly not be of the highest quality, but as the number of data points increases, the aggregate becomes more reliable -- a relationship described by the Law of Large Numbers.
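A quick simulation shows the effect; the contaminant level and sensor noise below are made-up numbers:

    import random

    TRUE_LEAD_PPB = 12.0   # hypothetical true contaminant level
    SENSOR_NOISE = 5.0     # assumed standard deviation of a cheap test kit

    for n in (10, 100, 10_000):
        readings = [random.gauss(TRUE_LEAD_PPB, SENSOR_NOISE) for _ in range(n)]
        print(n, round(sum(readings) / n, 2))
    # The average of the readings drifts ever closer to 12.0 as n grows.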

Gamify Public Health

An effective way to get people to contribute to this study is to offer them rewards. You don't need a ton of money to hand out; give them psychological rewards -- the feeling that they're helping their community. Similar to how Google Maps gamifies map contributions, this app could show users graphs of their contributions over time and stats about how many contaminants they've detected, making them feel like local heroes for discovering hazards and helping make the community a safer place.

Not only is the map the final resting place for the test data, but it can be used to help communities see which areas have out-of-date data, or no data at all, so they can work to fill in the gaps on the map. 

Visualization

It's important that this data is actionable; something you can take to your town hall or administrative body to say "here's where the problem is, please fix it." 

Simple data visualization can help amateurs identify the likely location of a contaminant. These visualizations can be created by connecting the dots and filling in the gaps, a mathematical process called interpolation. 

These visualizations are used to identify the probable location of a contaminant, as well as areas that need testing (2a: "missing data")

To improve accuracy even further, the data (1b) can be "wrapped" to fit publicly available schematics (1a) of water utility lines, drainage sewers, and hydrological flood studies. This would help place the data points within a logical context, answering questions like "what's upstream?" and "what's downstream?"

One method of achieving this visualization would be "surface fitting" (3a), a type of interpolation for 3D surfaces.
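As a sketch, SciPy's griddata can perform this kind of interpolation over scattered test results (the coordinates and readings below are invented):

    import numpy as np
    from scipy.interpolate import griddata

    # Hypothetical scattered test results: (x, y) location -> lead ppb
    points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.9, 0.9], [0.4, 0.6]])
    values = np.array([2.0, 3.0, 8.0, 15.0, 40.0])  # a spike near (0.4, 0.6)

    # Dense grid covering the neighborhood
    grid_x, grid_y = np.mgrid[0:1:50j, 0:1:50j]

    # "Connect the dots and fill in the gaps" with a fitted surface
    surface = griddata(points, values, (grid_x, grid_y), method="cubic")

    # Cells outside the sampled area come back NaN: the "missing data" zones
    print(np.isnan(surface).sum(), "grid cells still need testing")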

Now what?

Let's build it.

If you've got the skills and you'd like to take a stab at this, feel free to send your GitHub project link in the comments at the bottom of this page (click here if you don't see the comment box). With your permission, I'll promote your project in the body of the article.

Check out the video explanation of this idea, the outline on Workflowy, and feel free to join the discussion on Reddit (#1, #2) and Twitter.

Making 3D Cloud Imagery with Ground-based Imaging Systems

Imagine a weather app that lets you see detailed 3D models of the massive cloud structures above; the elegant wisps of cirrus, the bulky storm clouds as they roll across the sky… we have the technology to do this, we just haven’t done it yet.

Perspective sketch of a cumulonimbus storm cloud above a city.


The concept is simple: a network of cameras pointed at the sky, spread far apart from each other to capture different perspectives, and we use software to combine all the images into a live interactive 3D model.

This technique is called photogrammetry, and is widely used for making 3D models of surface features using aerial drones. 

Illustration of ground-based camera array using geometry to calculate three-dimensional cloud positions.
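Here’s a minimal Python sketch of the core geometry: two cameras at known positions each sight the same cloud feature, and the two rays are intersected. All positions and angles below are invented:

    import numpy as np

    def ray(azimuth_deg, elevation_deg):
        """Unit direction vector: x = east, y = north, z = up."""
        az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
        return np.array([np.cos(el) * np.sin(az),
                         np.cos(el) * np.cos(az),
                         np.sin(el)])

    def triangulate(p1, d1, p2, d2):
        """Midpoint of the shortest segment between two rays p + t*d."""
        n = np.cross(d1, d2)
        t1 = np.dot(np.cross(p2 - p1, d2), n) / np.dot(n, n)
        t2 = np.dot(np.cross(p2 - p1, d1), n) / np.dot(n, n)
        return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2

    # Two cameras 5 km apart, both sighting the same cloud feature
    cam_a, cam_b = np.array([0.0, 0.0, 0.0]), np.array([5000.0, 0.0, 0.0])
    cloud = triangulate(cam_a, ray(40.0, 30.0), cam_b, ray(320.0, 35.0))
    print(cloud)  # estimated (x, y, z) position in meters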

Deployment

In terms of hardware, “Whole Sky Cameras” (wiki) could be used to capture hemispheric images of everything above the horizon. A live stream would then be set up (similar to WU’s weather webcam system), and location data would be added to the webcam.

Initially, a company wanting to pioneer this idea could blanket one specific area with webcam coverage, as a proof of concept to showcase the 3D models. As the network grows, the site would automatically merge other people’s webcams into the global crowdsourced model.

Final Thoughts

Awe-inspiring sculptures of colorful sunsets, meandering tornadoes, and billowing volcanic eruptions: that’s what we can capture with this technology.

Not only would this satisfy the curious mind; this tech could also be valuable for research: atmospheric studies (meteorological prediction improvements, fluid dynamics), flight simulation models, etc.

If you'd like to contribute to this idea, feel free to share your GitHub project in the comment section below (click here if you don't see the comment box), and with your permission I'll embed it in this article. 

Check out the video explanation of this idea, the outline in Workflowy, and join the discussion on Reddit and Twitter.

View this article on Medium.

Detecting Poor Road Quality Using Our Phones

Roads are constantly falling apart. That’s just what happens. Sometimes the roads in greatest need of repair are neglected for long periods of time, but we can change that. We can use our collective computing power to help us prioritize which roads need to be fixed first.

The idea is to build a mobile application that records vibrations while you drive, and anonymously tracks your location to help build a crowdsourced map of how bumpy the roads are.
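Here’s a tiny sketch of the core measurement; the samples and the metric itself are my own stand-ins, not a standard roughness index:

    import statistics

    def roughness_score(accel_z, gravity=9.81):
        """Bumpiness = spread of vertical acceleration once gravity is removed."""
        detrended = [a - gravity for a in accel_z]
        return statistics.pstdev(detrended)

    # Hypothetical one-second bursts of samples (m/s^2) from a phone's accelerometer
    smooth_road = [9.8, 9.9, 9.7, 9.8, 9.8, 9.9]
    pothole = [9.8, 14.2, 3.1, 12.5, 9.8, 9.9]
    print(roughness_score(smooth_road))  # low score
    print(roughness_score(pothole))      # much higher score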

Vibrations due to cracks, uneven pavement, potholes — all recorded by the device’s accelerometer.

Given three example vibration datasets, the algorithm (statistical regression, probably; I’m no data scientist) is able to determine which vibrations are road-related and which are anomalies, such as someone fumbling with their phone.
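And here’s a toy sketch of that cross-checking step, using a median consensus rather than regression (the scores are invented):

    import statistics

    # Hypothetical roughness scores from five drivers over the same road segment
    passes = [0.8, 0.9, 0.7, 4.2, 0.8]  # one driver fumbled their phone

    consensus = statistics.median(passes)  # robust to the outlier
    spread = statistics.pstdev(passes)

    # Flag passes that disagree strongly with the consensus as likely user error
    anomalies = [p for p in passes if abs(p - consensus) > 2 * spread]
    print(consensus, anomalies)  # 0.8 [4.2]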

Comparing datasets against one another increases the quality of the roadmap, automatically identifying user errors.

Read about the Law of Large Numbers for a mathematical explanation of the relationship between the number of data inputs and accuracy.

Check out the video explanation of this idea, the outline in Workflowy, and feel free to join the discussion on Reddit (link 1, link 2) and Twitter.