I’m a huge fan of the intersection of science, technology, and art—where the distinguishing traits of humanity come together to produce some of the most awe-inspiring creations in our known universe. A couple of years ago I discovered an inspiring piece of engineering and art which aims to visualize the complexity and elegance of the human brain and the beautifully choreographed ballet of information that continuously travels through its billions of neurons as you experience each moment of your life.

Created by neuroscientist and artist Greg Dunn, the piece, titled Self-Reflected, struck all the right chords for my tastes and interests. I hemmed and hawed about buying it for over a year before finally deciding that I would splurge for Christmas and use it as an excuse to undertake a bit of a hobby project for myself. You can find all the details you might care to know about Self-Reflected and how it was made here. The rest of this post is about my efforts to get the most out of it. If you’re not interested in the details, you can just watch the video of the final result above.



Self-Reflected is an artistic rendering of an oblique mid-sagittal slice of the human brain; here is an image showing the location and orientation of such a slice. The piece is physically realized as a micro-etched print, which means that a fixed light source pointed at it is reflected differentially at neighboring points very close together on the etching. This technique produces visually interesting effects even with a static light source, but is most evidently impressive when the light source is moved relative to the etching.

The movement of a light source from side to side produces an animated effect in the etching that brings the rendering to life in a surprising and visceral way, giving the appearance of electrical impulses traveling along the axons and dendrites of the neurons depicted in the etching. Varying the intensity, speed, and color of the light source produces an endless array of animations, some of which you can see in the video I recorded above.

Since the purchase of Self-Reflected includes only the etching itself, I needed to build a lighting rig to mount over it in order to realize its full potential. I’ve documented the steps I took and design choices I made when building the lighting rig and control unit here for anyone potentially interested in doing something similar.

Lighting and Power

In order to enjoy the piece from a reasonable vantage point, I needed to animate a light source programmatically rather than stand over the etching and wave a light back and forth manually (this would get tiring). I did some brief searching, asked a friend, and found that NeoPixels are a popular choice for artistic lighting projects. NeoPixels are individually addressable LED lights that can be controlled by digital microcontrollers like an Arduino or Raspberry Pi. Technically, NeoPixel is Adafruit’s brand of addressable RGB LEDs built on the WS2812 driver/protocol. They ship in various configurations, but most commonly as a linear strip, which is exactly what I needed.

I purchased a one-meter strip of NeoPixel equivalents and started reading up on how to program them. Adafruit’s site was super helpful in figuring out what I needed and how to put everything together. They recommend powering the strip separately from the microcontroller that drives it, since the LEDs draw far more power than the chip. I purchased a 5V 2A switching power supply for the strip, a female DC power adapter to connect to the strip’s leads, and a 4700 µF 10 V capacitor to put across the terminals; Adafruit recommends the capacitor to keep the initial inrush of current from damaging the pixels.

There are options for powering the NeoPixels via batteries, but since the rig was going to be mounted stationary over the etching I didn’t bother exploring them much. I could just leave the whole thing plugged in all the time and not worry about charging batteries, though the cables are admittedly a bit ugly.

With these parts assembled, I connected the power supply, adapter, and capacitor to the strip and plugged it in, lighting up the strip. So far so good. Now I needed to figure out how to control them.


I wanted to be able to control the lighting rig from my phone, both to avoid getting up on a chair to push buttons on the controller and to make customizing the lights easy. I looked at some popular microcontrollers and settled on Adafruit’s Feather HUZZAH ESP8266, which is sufficiently small and has a built-in WiFi module. Once I had the Feather, I connected it to my laptop over USB and followed Adafruit’s guide to interacting with it using the Arduino IDE. Next I needed to connect the NeoPixel strip to the Feather.

I soldered connector wires from the ground and data leads on the strip to the appropriate pins on the Feather. At this point I could power both the strip and the Feather without anything catching fire, and they seemed to work properly. The strip still only came on with all pixels at full white, though; to change their color and brightness I needed to actually send them some data.

The NeoPixel Arduino library is open source and lets you drive a set of NeoPixels from an Arduino through a simple interface. I loaded one of the library’s example sketches onto the Feather through the Arduino IDE to test the full setup, and things seemed to work fine. Two things remained: write a program to move the lights in a pattern that best fits the purpose of Self-Reflected, and find a way to customize a few of the program’s properties over WiFi so I wouldn’t need to make code changes to adjust them.

For the latter step I settled on the Blynk IoT platform, which provides a user-friendly way to create widgets in a phone app and tie them to “virtual pins” on your Arduino by writing functions that use Blynk’s libraries to send and receive data on those pins. Blynk is a paid service, but free for a single-user, single-device project, which is all I needed. Here’s a shot of the set of widgets I chose for the lighting controls.

The widgets let me turn the whole strip on and off, toggle the light-chase animation, set the color and brightness of the lights, adjust the speed of the chase, and set the width of the little Gaussian bumps that produce the animation effect as they move across the strip.

The Arduino program that animates the lights and communicates with the Blynk app is fairly simple. Here’s a gist of the code, with my network details redacted.

With the control unit working and the code written, the last step was to mount the whole rig and fine tune the settings.


I needed to mount the light strip above the etching, facing down toward the ground to get the proper effect. This required a custom mount, which I built from scrap wood and a small hinge I got from Home Depot.

I wanted the whole mount to be easily detachable from the wall to make servicing and experimenting with the light strip easy. The base of the mount is a horizontal wooden bar, which hooks onto a couple of screws in the wall via picture-mounting brackets screwed into its back. A cross bar extends from the base bar to put distance between the wall and the light strip. The mounting bar for the light strip is a long (four-foot) thin piece of wood slightly wider than the strip itself, attached to the cross bar with a small metal hinge so that I could adjust the strip’s angle from vertical after construction without needing to recut anything. I stained the whole mount with a dark wood stain to better match the etching frame and my furniture.

I mounted the NeoPixel strip to the mounting bar using a metal casing strip designed to hold the light strip flat in place. The casing comes with a translucent cover that slides over it to diffuse the light, making the strip look continuous rather than like a row of individual LEDs.

I wanted some kind of case for the Feather and its connecting wires so that I didn’t have to attach them directly to the wood and leave loose wires hanging off of it. I found a page on Adafruit’s site providing modular CAD models of different Feather cases that can be 3D printed. I downloaded the parts I wanted (the Feather case with mounting tabs and the topper with header holes) and had them printed by 3D Hubs for a reasonable price.

In the end, because I’m a terrible electrical engineer and not much of a handyman, the case didn’t wind up providing much in the way of cleaning up the design of the mount, but it’s better than nothing. There are still wires sticking out un-aesthetically, but they’re not really visible from below when it’s mounted above the etching. Things aren’t perfectly straight either, but I’m calling diminishing returns on spending more time on it. Here are a couple of photos of the final (janky) version of the mount (yes there was duct tape involved).

End Result

After sneaking an hour or two here and there every couple of weeks since Christmas working all of these steps out, I finally finished the damned thing. Or at least I’ve put as much time into it as I care to. The video at the top of the post gives you a sense of the piece as it was meant to be viewed (I hope). Below is a photo of the final result mounted over the etching (yes I know it’s a little crooked; diminishing returns). I learned a few things working on this, but mostly I’m happy that I now have an animated brain in my bedroom.



Think Slower

tl;dr: Read this book, even if it’s the only one you ever read.

Earlier this year I read The Undoing Project by Michael Lewis (author of Moneyball and The Big Short), an account of the unique relationship between the author of the above book, Daniel Kahneman, and his colleague Amos Tversky. It details their collaboration in redefining theories of decision making and behavioral economics in the 70s and 80s. Thinking, Fast and Slow, which I read last year, is an excellent compendium of Kahneman and Tversky’s research, and I think it should be required reading in high school.

The short of Thinking Fast and Slow is that most of the decisions you make, big or small, you don’t make for the reasons you think you made them, and this property of human behavior is a consequence of the way our brains are wired. We have “many brains” in our heads, or rather many subsystems in our brains, each vying for control over our behavior. At the most abstract level (the level at which the book makes its primary distinction) there are two main subsystems that operate in parallel. When the decisions these subsystems make are in conflict, one decision must win out over the other, since we only have one body to control. More often than not the “right” decision is made based on the context—our brains wouldn’t be much good if they were wrong most of the time. But often, especially for modern humans, brains make decisions that seem like the best option to our conscious minds, but are actually suboptimal or detrimental, either immediately or down the road.

Our brains work this way because of how they came to be. Evolution is a necessarily greedy algorithm. It can’t go back to the drawing board when it realizes that a major restructuring would produce a much better outcome, precisely because it can’t have such a realization. It can only make small changes to existing solutions, either modifying a piece of what’s already there slightly, or adding something new on top of it. Of course these small changes accumulate over time to produce an incredibly diverse array of creations, which is what makes it such a powerful algorithm. When it comes to brains, this greedy process necessitates building new modes of behavior on top of all the existing modes. The result is a cacophony of voices constantly shouting their orders, with the loudest voices at any given time winning control over the muscles. Marvin Minsky called this The Society of Mind, though there are countless theories and interpretations of this principle in psychology, neuroscience, cognitive science, and artificial intelligence.

What this means for the way we behave, unfortunately, is a whole lot of inner conflict, both conscious and subconscious. The reflexes and impulses that are excellent at catching flies to eat and running away from murderous predators aren’t sensible solutions to complex logical problems that require weighing alternatives from multiple, very deep branches of a possibility space. Yet the parts of our brains that evolved the ability to solve the more complex problems had to be bootstrapped from the older ones that solved the simpler problems. Since the older parts don’t always get kicked to the curb as the new ones come online, all of the parts cast votes for moving our arms and legs and tongues every second of our lives. What makes humans special is that our brains evolved enough new technology to recognize this fact and have it significantly influence the voting process. We can stop, reflect, and invalidate the votes of the older parts of the brain in some cases. This doesn’t come naturally though. It has to be learned and practiced.

Acknowledging this fact and adjusting our behavior accordingly is one of the most important things humans can learn to do, and why the concepts in this book are so important. No one will ever be able to completely overcome the biases built into our brains or the way we learn and perceive our world; that’s a biological impossibility. In the coming decades we will likely design machines that are better at this than we are, or perhaps augment our brains with machinery that makes this feasible. But for now, just recognizing that these biases exist and taking the extra few seconds or minutes to think more objectively through important decisions (even small ones), can have a profound impact on our lives for the better.

Unfortunately the very neural structures that allow us to think slowly and deliberately about complex problems in this way have provided us the means to invent technology that reinforces exactly the opposite behavior. Our current ability to communicate instantly with anyone and everyone, anywhere, at any time has produced a culture of sound bites, instant gratification, and 140-character summaries of topics that should take pages to explain properly. The deluge of information we receive daily precludes taking the time to understand it properly. We form opinions instantly based on very little information and tout them as fact, and many are proud of their “talent” for making these quick decisions, never doubting their (often low) accuracy.

This type of thinking is epitomized, personified, and glorified by our current president, who reasons almost exclusively using what Kahneman calls System 1—the subconscious, subjective, reactive, quick-acting, emotion-driven decision-making system governed primarily by the evolutionarily older parts of the brain; the fly-catching, predator-escaping, sex-obsessed parts. This is not meant to be a political post. I only use Trump to make the following example. As soon as you read the words “our current president”, you immediately formed a subconscious (and subsequently conscious) opinion about this post. If you lean left, it was likely to some extent a “fuck yes” feeling that resulted in some shade of agreement. If you lean right, or for some other reason are a Trump supporter, it was likely a subconscious eye-roll or middle finger which blossomed into a “this is pretentious bullshit” conclusion that you feel is entirely justified by the fact that I wear gauges and live in San Francisco. The point is, you likely determined your interest in reading the above book based on this reaction, when in fact it should have little to no bearing on that decision.

The initial subconscious reactions that led to this conclusion were unavoidable. System 1 is always running. You can’t turn it off. You can only override it. My choice of the word “likely” instead of “definitely” in the previous paragraph was made by my System 2—the slow-moving, deliberative, cautious, uncertain, logical, and statistically-aware parts of my brain. If I were generating this post off-the-cuff (or under the influence), my System 1 would have produced something like “All Trump supporters are ignorant System 1 zombies that have no fucking idea what they’re talking about”. This is the immediate, visceral reaction that happens in my brain when I see his name because of the associations with him I have built up over time, and the kind of thing you see on most internet comments. That immediate reaction is unavoidable (barring deliberate, long-term reconditioning). But it would be horrifically irrational for me to let those parts of my brain control my fingers while typing this, just as it would be horrifically irrational of me to grab the crotch of someone I find attractive, but who hasn’t given me permission to do so.

All Trump supporters are not Trump. It is irrational to equate the two and their ideologies without knowing more information about each person individually. Of course it is generally prohibitively costly to acquire that much information, which is exactly why System 1 exists, and why it evolved before System 2. System 1 operates on heuristics—general rules of thumb that hold more often than not. Heuristics (e.g., stereotypes) are extremely useful when high-stakes decisions must be made in seconds or less. These rules mean the difference between life and death for nearly every animal on the planet, but not for most modern humans. Yet most modern humans still use System 1 to make their high-stakes decisions, even though there is plenty of time to let System 2 do its thing.

Part of this has to do with culture. In America at least there seems to be a bizarre marriage of two diametrically opposed attitudes toward decision-making: anti-intellectualism and fear of appearing ignorant. Mainstream media often paints rational thinkers, scientists, and scholars as bookish, intellectual elites who sit in ivory towers in lab coats and disseminate indisputable facts; a separate portion of society from which we obtain some information needed to set policy, but which doesn’t know anything about living in the “real world”. It would require a much longer post to list all of the reasons why this is completely ridiculous. At the same time, it also paints anyone who hesitates in their explanation of complex topics, or provides probabilistic answers conditional on further information or study, as incompetent, unconvincing, and wrong. The direct outcome of this is that many people speak with extreme confidence on matters they have spent very little, if any, time contemplating, because they are afraid to say “I don’t know”. Yet they also can’t be bothered to spend the time to understand the issues better because they’re “not a scientist”.

Getting past this barrier is a matter of education. People need to understand not only basic probability and statistics, but all the ways in which their brains conspire against them to subvert those laws. This is precisely what Thinking, Fast and Slow attempts to do. Only by understanding how their brains function can people recognize when System 1 is making their decisions for them, and instead take the time to think slower and engage System 2. Hint: it’s pretty often.

Do I think that introducing the principles of this book into high school curricula will produce a significant difference in the behavior of subsequent generations? I don’t know. But I’ve got a good feeling about it. So maybe we should think (slowly) on it.


A lot can happen in 8 months. Shortly after my last post on the day I left HitPoint I was presented with an opportunity to join a new startup in San Francisco working on deep reinforcement learning. Seeing as how I had just left HitPoint to finish up my PhD in reinforcement learning, and was a bit anxious to get out of western MA after so many years there, it wasn’t something I could easily pass up, despite the fact that it would mean having a full time job again while trying to find time to finish my dissertation. And so, after receiving a job offer, I packed up and moved out to SF in June.

I’ve liked living in San Francisco a lot so far. The weather is probably the biggest plus for me, with the metropolitan culture a close second. It’s great to be able to walk to most of the places I need to go in under 20 minutes year round without needing a winter coat or profusely sweating. The cost of living is definitely the worst aspect. My rent more than quadrupled moving out here, but I did manage to find a nice one-bedroom in a high rise just a five minute walk from my new office.

My new position is Research Engineer at Osaro, Inc. We’re developing deep reinforcement learning tech that we plan to apply to difficult real-world problems (e.g., robotics), so that our clients can reap the benefits of recent breakthroughs in machine learning. Our solutions are in the same spirit as some of the work being done at Google’s DeepMind, with notable differences that I’m not currently at liberty to divulge. 🙂 I’m very excited to be a part of this team, and I think we’ll be making big waves in the machine learning and robotics community in the next couple of years. It’s great to be back in the machine learning game and making use of all the knowledge I gained during my doctoral research.

Speaking of which, as expected, having a full time job immediately after leaving my previous one didn’t do much to help with finishing up my dissertation. Although I didn’t finish up this summer as I had hoped, I’m happy to say that just last week I successfully defended my thesis and can now legitimately ask to be called Dr. Vigorito. It’s a great feeling to have that accomplishment under my belt, and even greater to be able to move on to new and exciting things. It was a long time coming, especially given my five year hiatus, but it’s finally done.

So yea. 2015. New job. New city. New degree. Lots of changes. I’m looking forward to all of the exciting changes in 2016.

Onwards and Upwards

It is with numerous mixed emotions that I end today, my last day of work at HitPoint Studios. It’s been a pretty wild ride for the last five years, and I truly appreciate everything I’ve learned and accomplished during my tenure there. Being a member and leader of such a great team has given me so many skills and experiences that I’ll carry with me for the rest of my life. I wholeheartedly appreciate all of the insanely hard work and assistance every member of the HitPoint team has put in over the years, and I hope to stay in touch with all of them.

I am leaving HitPoint to spend the next few months finishing up my PhD at UMass Amherst, which has long been languishing in the background of my psyche, and then moving on to new and exciting things yet to be determined. It’s a bit of an uncertain time for me, but I look forward to the challenges that uncertainty presents.

To all the HitPointers on the kick-ass team I’m leaving behind, I have no doubt you will continue to kick ass and turn out great games for tons of eager fans. Best of luck to you all!

Stay healthy. Stay hungry. Stay in touch. Ciao!

Fun with GarageBand

I had U2’s “Running to Stand Still” stuck in my head for a couple of days. Last night I did this to try to get it out, courtesy of an instrumental version and some messing around in GarageBand. Apparently adding a little reverb to your voice can make you sound halfway decent, even using a laptop mic. Now I just need to learn how to mix tracks properly.

Anyway the experiment was a failure. It’s still stuck in my head.

New Site!

Since I splurged and got my own domain and some personal web space I decided it was time to move all my crap over from my UMass site. Welcome to my new one! I’m still learning this WordPress thing, but I think I’ve got the basics down. Still experimenting with this theme, but so far I like it. Check out my other pages (Academics and Game Dev) if you like. More to come!