I’m a huge fan of the intersection of science, technology, and art—where the distinguishing traits of humanity come together to produce some of the most awe-inspiring creations in our known universe. A couple of years ago I discovered an inspiring piece of engineering and art which aims to visualize the complexity and elegance of the human brain and the beautifully choreographed ballet of information that continuously travels through its billions of neurons as you experience each moment of your life.

Created by neuroscientist and artist Greg Dunn, the piece, titled Self-Reflected, struck all the right chords for my tastes and interests. I hemmed and hawed about buying it for over a year before finally deciding that I would splurge for Christmas and use it as an excuse to undertake a bit of a hobby project for myself. You can find all the details you might care to know about Self-Reflected and how it was made here. The rest of this post is about my efforts to get the most out of it. If you’re not interested in the details, you can just watch the video of the final result above.



Self-Reflected is an artistic rendering of an oblique mid-sagittal slice of the human brain; here is an image showing the location and orientation of such a slice. The piece is physically realized as a micro-etched print, meaning that a fixed light source pointed at it is reflected differentially at neighboring points very close together on the etching. This technique produces visually interesting effects even with a static light source, but is most impressive when the light source moves relative to the etching.

The movement of a light source from side to side produces an animated effect in the etching that brings the rendering to life in a surprising and visceral way, giving the appearance of electrical impulses traveling along the axons and dendrites of the neurons depicted in the etching. Varying the intensity, speed, and color of the light source produces an endless array of animations, some of which you can see in the video I recorded above.

Since the purchase of Self-Reflected includes only the etching itself, I needed to build a lighting rig to mount over it in order to realize its full potential. I’ve documented the steps I took and design choices I made when building the lighting rig and control unit here for anyone potentially interested in doing something similar.

Lighting and Power

To enjoy the piece from a reasonable vantage point, I needed to animate a light source programmatically, rather than stand over the etching and wave a light back and forth manually (this would get tiring). I did some brief searching, asked a friend, and found that NeoPixels were a popular choice for artistic lighting projects. NeoPixels are individually addressable LED lights that can be controlled via digital microcontrollers like an Arduino or Raspberry Pi. Technically, NeoPixel is Adafruit’s brand of addressable RGB LEDs using the WS2812 driver/protocol. They ship in various configurations, but most commonly as a linear strip, which is exactly what I needed.

I purchased a one-meter strip of NeoPixel equivalents and started reading up on what I needed to program them. Adafruit’s site was super helpful in figuring out what I needed and how to put everything together. They recommend powering the strip separately from the microcontroller used to control it, since the LEDs need a lot more power than the chip. I purchased a 5V 2A switching power supply to power the strip, a female DC power adapter to connect to the leads on the strip, and a 4700uF 10V capacitor to put across the terminals; the last component is recommended by Adafruit to prevent any initial rush of current from damaging the pixels.

There are options for powering the NeoPixels via batteries, but since the rig was going to be mounted stationary over the etching I didn’t bother exploring them much. I could just leave the whole thing plugged in all the time and not worry about charging batteries, though the cables are admittedly a bit ugly.

With these parts assembled, I connected the power supply, adapter, and capacitor to the strip and plugged it in, lighting up the strip. So far so good. Now I needed to figure out how to control them.


I wanted to be able to control the lighting rig from my phone, both to avoid needing to get up on a chair to push buttons on the controller and to be able to customize properties of the lights easily. I looked up some popular microcontrollers and settled on Adafruit’s Feather HUZZAH ESP8266, which is sufficiently small and has a built-in WiFi module. Once I had the Feather, I connected it to my laptop over USB and followed Adafruit’s guide to interacting with it using the Arduino IDE. Now I needed to connect the NeoPixel strip to the Feather.

I soldered connector wires from the ground and data leads on the strip to the appropriate pins on the Feather. At this point I was able to turn both the strip and the Feather on without anything catching on fire, and they seemed to work properly. The strip still only turned on with all pixels at full white though. To make any changes to their color and brightness I needed to actually send some data to them.

Adafruit’s NeoPixel Arduino library is open source and lets you program a set of NeoPixels from an Arduino-compatible board through a simple interface. I loaded one of the library’s example sketches onto the Feather through the Arduino IDE to test the full setup, and things seemed to work fine. Two things remained: write a program to move the lights in a pattern that best fits the purpose of Self-Reflected, and find a way to customize a few properties of that program over WiFi so I wouldn’t need to make code changes to adjust them.

For the latter step I settled on the Blynk IoT platform, which provides a user-friendly way to create widgets in a phone app that you tie to “virtual pins” on your Arduino by writing functions that use Blynk’s libraries to send and receive data on those pins. Blynk is a paid service, but free for a single-user, single-device project, which is all I needed. Here’s a shot of the set of widgets I chose for the lighting controls.

The widgets let me turn the whole strip on and off, toggle the light-chase animation, set the color and brightness of the lights, adjust the speed of the chase, and control the width of the little Gaussian bumps that produce the animation effect as they move across the strip.

The Arduino program that animates the lights and communicates with the Blynk app is fairly simple. Here’s a gist of the code, with my network details redacted.

With the control unit working and the code written, the last step was to mount the whole rig and fine tune the settings.


I needed to mount the light strip above the etching, facing down toward the ground to get the proper effect. This required a custom mount, which I built from scrap wood and a small hinge I got from Home Depot.

I wanted the whole mount to be easily detachable from the wall to make servicing and experimenting with the light strip easy. The base of the mount is a horizontal wooden bar, which hooks onto a couple of screws in the wall using picture-mounting brackets I screwed into the back of the bar. A cross bar extends from the base bar to put distance between the wall and the light strip. The mounting bar for the light strip is a long (four-foot), thin piece of wood slightly wider than the strip itself, attached to the cross bar with a small metal hinge so that I could adjust the strip’s angle from vertical after construction without needing to recut anything. I stained the whole mount structure with a dark wood stain to better match the etching frame and my furniture.

I mounted the NeoPixel strip to the mounting bar using a metal casing strip designed to hold the light strip flat in place. The casing comes with a translucent cover that slides over it to diffuse the light coming from the strip and make it seem more continuous, rather than a sequence of individual LEDs.

I wanted some kind of case to put the Feather and connecting wires in so that I didn’t have to attach them directly to the wood and have loose wires hanging off of it. I found this page on Adafruit’s site providing modular CAD models of different types of cases for the Feather, which could be 3D printed. I downloaded the parts I wanted (the Feather case with mounting tabs and the topper with header holes) and had them 3D printed by 3D Hubs for a reasonable price.

In the end, because I’m a terrible electrical engineer and not much of a handyman, the case didn’t do much to clean up the design of the mount, but it’s better than nothing. There are still wires sticking out un-aesthetically, but they’re not really visible from below once it’s mounted above the etching. Things aren’t perfectly straight either, but I’m calling diminishing returns on spending more time on it. Here are a couple of photos of the final (janky) version of the mount (yes, there was duct tape involved).

End Result

After sneaking an hour or two here and there every couple of weeks since Christmas working all of these steps out, I finally finished the damned thing. Or at least I’ve put as much time into it as I care to. The video at the top of the post gives you a sense of the piece as it was meant to be viewed (I hope). Below is a photo of the final result mounted over the etching (yes I know it’s a little crooked; diminishing returns). I learned a few things working on this, but mostly I’m happy that I now have an animated brain in my bedroom.



Think Slower

tl;dr: Read this book, even if it’s the only one you ever read.

Earlier this year I read The Undoing Project by Michael Lewis (author of Moneyball and The Big Short), an account of the unique relationship between Daniel Kahneman, the author of the above book, and his colleague Amos Tversky. It details their collaboration in redefining theories of decision making and behavioral economics in the 70s and 80s. Thinking, Fast and Slow, which I read last year, is an excellent compendium of Kahneman and Tversky’s research, and I think it should be required reading in high school.

The short of Thinking, Fast and Slow is that most of the decisions you make, big or small, you don’t make for the reasons you think you made them, and this property of human behavior is a consequence of the way our brains are wired. We have “many brains” in our heads, or rather many subsystems in our brains, each vying for control over our behavior. At the most abstract level (the level at which the book makes its primary distinction) there are two main subsystems that operate in parallel. When the decisions these subsystems make are in conflict, one decision must win out over the other, since we only have one body to control. More often than not the “right” decision is made based on the context—our brains wouldn’t be much good if they were wrong most of the time. But often, especially for modern humans, brains make decisions that seem like the best option to our conscious minds, but are actually suboptimal or detrimental, either immediately or down the road.

Our brains work this way because of how they came to be. Evolution is a necessarily greedy algorithm. It can’t go back to the drawing board when it realizes that a major restructuring would produce a much better outcome, because it can’t have such a realization. It can only make small changes to existing solutions, either modifying a piece of what’s already there slightly, or adding something new on top of it. Of course these small changes accumulate over time to produce an incredibly diverse array of creations, which is what makes it such a powerful algorithm. When it comes to brains, this greedy process necessitates designing new modes of behavior on top of all the existing modes. The result is a cacophony of voices constantly shouting their orders, with the loudest voices at any given time winning control over the muscles. Marvin Minsky called this The Society of Mind, though there are countless theories and interpretations of this principle in psychology, neuroscience, cognitive science, and artificial intelligence.

What this means for the way we behave, unfortunately, is a whole lot of inner conflict, both conscious and subconscious. The reflexes and impulses that are excellent at catching flies to eat and running away from murderous predators aren’t sensible solutions to complex logical problems that require weighing alternatives from multiple, very deep branches of a possibility space. Yet the parts of our brains that evolved the ability to solve the more complex problems had to be bootstrapped from the older ones that solved the simpler problems. Since the older parts don’t always get kicked to the curb as the new ones come online, all of the parts cast votes for moving our arms and legs and tongues every second of our lives. What makes humans special is that our brains evolved enough new technology to recognize this fact and have it significantly influence the voting process. We can stop, reflect, and invalidate the votes of the older parts of the brain in some cases. This doesn’t come naturally though. It has to be learned and practiced.

Acknowledging this fact and adjusting our behavior accordingly is one of the most important things humans can learn to do, and why the concepts in this book are so important. No one will ever be able to completely overcome the biases built into our brains or the way we learn and perceive our world; that’s a biological impossibility. In the coming decades we will likely design machines that are better at this than we are, or perhaps augment our brains with machinery that makes this feasible. But for now, just recognizing that these biases exist and taking the extra few seconds or minutes to think more objectively through important decisions (even small ones), can have a profound impact on our lives for the better.

Unfortunately the very neural structures that allow us to think slowly and deliberately about complex problems in this way have provided us the means to invent technology that reinforces exactly the opposite behavior. Our current ability to communicate instantly with anyone and everyone, anywhere, at any time has produced a culture of sound bites, instant gratification, and 140-character summaries of topics that should take pages to explain properly. The deluge of information we receive daily precludes taking the time to understand it properly. We form opinions instantly based on very little information and tout them as fact, and many are proud of their “talent” for making these quick decisions, never doubting their (often low) accuracy.

This type of thinking is epitomized, personified, and glorified by our current president, who reasons almost exclusively using what Kahneman calls System 1—the subconscious, subjective, reactive, quick-acting, emotion-driven decision-making system governed primarily by the evolutionarily older parts of the brain; the fly-catching, predator-escaping, sex-obsessed parts. This is not meant to be a political post. I only use Trump to make the following point. As soon as you read the words “our current president”, you immediately formed a subconscious (and subsequently conscious) opinion about this post. If you lean left, it was likely to some extent a “fuck yes” feeling that resulted in some shade of agreement. If you lean right, or for some other reason are a Trump supporter, it was likely a subconscious eye-roll or middle finger which blossomed into a “this is pretentious bullshit” conclusion that you feel is entirely justified by the fact that I wear gauges and live in San Francisco. The point is, you likely determined your interest in reading the above book based on this reaction, when in fact it should have little to no bearing on that decision.

The initial subconscious reactions that led to this conclusion were unavoidable. System 1 is always running. You can’t turn it off. You can only override it. My choice of the word “likely” instead of “definitely” in the previous paragraph was made by my System 2—the slow-moving, deliberative, cautious, uncertain, logical, and statistically-aware parts of my brain. If I were generating this post off-the-cuff (or under the influence), my System 1 would have produced something like “All Trump supporters are ignorant System 1 zombies that have no fucking idea what they’re talking about”. This is the immediate, visceral reaction that happens in my brain when I see his name because of the associations with him I have built up over time, and the kind of thing you see on most internet comments. That immediate reaction is unavoidable (barring deliberate, long-term reconditioning). But it would be horrifically irrational for me to let those parts of my brain control my fingers while typing this, just as it would be horrifically irrational of me to grab the crotch of someone I find attractive, but who hasn’t given me permission to do so.

All Trump supporters are not Trump. It is irrational to equate the two and their ideologies without knowing more about each person individually. Of course it is generally prohibitive to acquire that amount of information, which is exactly why System 1 exists, and why it evolved before System 2. System 1 operates on heuristics—general rules of thumb that hold more often than not. Heuristics (e.g., stereotypes) are extremely useful when high-stakes decisions must be made in seconds or less. These rules mean the difference between life and death for nearly every animal on the planet, but not for most modern humans. Yet most modern humans still use System 1 to make their high-stakes decisions, even though there is plenty of time to let System 2 do its thing.

Part of this has to do with culture. In America at least there seems to be a bizarre marriage of two diametrically opposed attitudes toward decision-making: anti-intellectualism and fear of appearing ignorant. Mainstream media often paints rational thinkers, scientists, and scholars as bookish, intellectual elites who sit in ivory towers in lab coats and disseminate indisputable facts; a separate portion of society from which we obtain some information needed to set policy, but which doesn’t know anything about living in the “real world”. It would require a much longer post to list all of the reasons why this is completely ridiculous. At the same time, it also paints anyone who hesitates in their explanation of complex topics, or provides probabilistic answers conditional on further information or study, as incompetent, unconvincing, and wrong. The direct outcome of this is that many people speak with extreme confidence on matters they have spent very little, if any, time contemplating, because they are afraid to say “I don’t know”. Yet they also can’t be bothered to spend the time to understand the issues better because they’re “not a scientist”.

Getting past this barrier is a matter of education. People need to understand not only basic probability and statistics, but all the ways in which their brains conspire against them to subvert the laws of basic probability and statistics. This is precisely what Thinking, Fast and Slow attempts to do. Only by understanding how their brains function can people recognize when System 1 is making their decisions for them and instead take the time to think slower and engage System 2. Hint: it’s pretty often.

Do I think that introducing the principles of this book into high school curricula will produce a significant difference in the behavior of subsequent generations? I don’t know. But I’ve got a good feeling about it. So maybe we should think (slowly) on it.


A lot can happen in 8 months. Shortly after my last post on the day I left HitPoint I was presented with an opportunity to join a new startup in San Francisco working on deep reinforcement learning. Seeing as how I had just left HitPoint to finish up my PhD in reinforcement learning, and was a bit anxious to get out of western MA after so many years there, it wasn’t something I could easily pass up, despite the fact that it would mean having a full time job again while trying to find time to finish my dissertation. And so, after receiving a job offer, I packed up and moved out to SF in June.

I’ve liked living in San Francisco a lot so far. The weather is probably the biggest plus for me, with the metropolitan culture a close second. It’s great to be able to walk to most of the places I need to go in under 20 minutes year round without needing a winter coat or profusely sweating. The cost of living is definitely the worst aspect. My rent more than quadrupled moving out here, but I did manage to find a nice one-bedroom in a high rise just a five minute walk from my new office.

My new position is Research Engineer at Osaro, Inc. We’re developing deep reinforcement learning tech that we plan to apply to difficult real-world problems (e.g., robotics), so that our clients can reap the benefits of recent breakthroughs in machine learning. Our solutions are in the same spirit as some of the work being done at Google’s DeepMind, with notable differences that I’m not currently at liberty to divulge. 🙂 I’m very excited to be a part of this team, and I think we’ll be making big waves in the machine learning and robotics community in the next couple of years. It’s great to be back in the machine learning game and making use of all the knowledge I gained during my doctoral research.

Speaking of which, as expected, having a full time job immediately after leaving my previous one didn’t do much to help with finishing up my dissertation. Although I didn’t finish up this summer as I had hoped, I’m happy to say that just last week I successfully defended my thesis and can now legitimately ask to be called Dr. Vigorito. It’s a great feeling to have that accomplishment under my belt, and even greater to be able to move on to new and exciting things. It was a long time coming, especially given my five year hiatus, but it’s finally done.

So yea. 2015. New job. New city. New degree. Lots of changes. I’m looking forward to all of the exciting changes in 2016.

Onwards and Upwards

It is with numerous mixed emotions that I end today, my last day of work at HitPoint Studios. It’s been a pretty wild ride for the last five years, and I truly appreciate everything I’ve learned and accomplished during my tenure there. Being a member and leader of such a great team has given me so many skills and experiences that I’ll carry with me for the rest of my life. I wholeheartedly appreciate all of the insanely hard work and assistance every member of the HitPoint team has put in over the years, and I hope to stay in touch with all of them.

I am leaving HitPoint to spend the next few months finishing up my PhD at UMass Amherst, which has long been languishing in the background of my psyche, and then moving on to new and exciting things yet to be determined. It’s a bit of an uncertain time for me, but I look forward to the challenges that uncertainty presents.

To all the HitPointers on the kick-ass team I’m leaving behind, I have no doubt you will continue to kick ass and turn out great games for tons of eager fans. Best of luck to you all!

Stay healthy. Stay hungry. Stay in touch. Ciao!

New Year, New HitPoint

The wheels are in motion! 2014 was a bit of a roller-coaster for HitPoint. We hit some pretty awesome goals, but faced our fair share of challenges and hardships as well. The upheaval at Microsoft earlier this year, particularly in their gaming division, hit us hard and was a major impetus for our decision to seek new VC funding. Thanks to members of the Western MA investor network River Valley Investors and the Springfield Venture Fund created by MassMutual, as well as other generous investors, we were able to secure enough funding to finish off the majority of our work-for-hire projects and focus the team exclusively on HitPoint-owned games through early 2015. This is both a liberating and challenging position for us to be in, but we’re embracing it with full enthusiasm and determination.

Our Facebook games are doing well and part of the team is heads down on making sure those experiences continue to be enjoyable to our players. If you haven’t checked out Seaside Hideaway, Jane Austen Unbound, Relic Quest, or Fablewood, the latter of which we recently released on iPad, give them a go and let us know what you think.

We have also licensed a mobile game that we developed for DeNA called Hell Marys, which we will be re-releasing this January. The game will be available on Android and iOS devices. The content is definitely a departure from most of the other game genres we’ve worked on, but it has been a great experience for the team to work on something different. While it’s definitely a game for mature audiences only, we’re excited to be tackling new territory and giving it the HitPoint level of polish it deserves. Look out for another post in January when we launch.

The most exciting part of our new focus, which the majority of the team has been working on the past few months, is our “next big thing”.  More details will be forthcoming in February when we announce the new product at Casual Connect Europe in Amsterdam, but for now suffice it to say it’s a very ambitious project we think is going to make a big splash in the mobile space next year. We’re all thrilled to be working on it and are doing our best to make it a fantastic user experience.

Finally, with the new year and our new direction come two other big changes for HitPoint. The first is our new logo, which is changing for the first time since HitPoint was founded in 2008. Check out the new hotness in this post’s featured image above, courtesy of one of HitPoint’s amazing artists, Steve Forde. Our new branding really captures the playfulness and accessibility of our games while still paying homage to our old-school gaming roots, and we’re excited for it to be a new way to get the HitPoint name out to the world.

The second major change is our relocation (again!). After just over a year and a half in our current space in Amherst, which we’re all quite fond of, we’ll be moving to a new office space located in the heart of Springfield’s financial district at One Financial Plaza. It’s a fantastic space with an amazing view that you can see somewhat in the photos below, which show our new suite as it is right now, under construction. What’s even more amazing is that all of this construction is happening over the course of just a couple of weeks, and we’re all set to move in this weekend! Thanks to our investors at RVI and MassMutual for making this a reality. Everything is happening very fast, but being able to start the new year in this awesome new space is a great way for us to kick off 2015, which we think will be a huge year for HitPoint. Stay tuned for more announcements as we bring all of these threads together next year to make some amazing products.

Under Construction





Fablewood is now on iPad!

HitPoint’s most recent independent game Fablewood is now available on iPad! Previously free to play only on Facebook, Fablewood is a hidden object game in which you banish evil from famous fairy tale stories you know and love, and build up your own enchanted forest. Now you can play for free on your iPad.

Our team has done an amazing job getting the game running smoothly on Apple’s tablet, and it looks fantastic on devices with retina displays. Fablewood is available on iPad 2 and higher, and requires iOS 7 or higher. You can check out some screen shots below.

If you’re already playing on Facebook, or want to play on both your iPad and your computer, Fablewood will keep all of your game data synced between your devices. Just sign into the app with Facebook on your iPad!

So if you’ve got an iPad, grab the game from the App Store and try it out. If you like it, we’d be super appreciative if you gave it a good review! If you don’t have an iPad, you can still play on Facebook, and visit our community page to leave your comments or like Fablewood. We hope you enjoy it!

Fablewood iOS Screenshot



Fun with GarageBand

I had U2’s “Running to Stand Still” stuck in my head for a couple of days. Last night I did this to try to get it out, courtesy of an instrumental version of the song and some messing around in GarageBand. Apparently adding a little reverb to your voice can make you sound halfway decent, even using a laptop mic. Now I just need to learn how to mix tracks properly.

Anyway the experiment was a failure. It’s still stuck in my head.

DreamWorks Dragons Adventure Takes Flight

HitPoint has just finished up development on a new mobile game based on the DreamWorks movie How to Train Your Dragon 2. The game, Dragons Adventure: World Explorer, is currently available as a free download on select Windows 8 tablets and Windows Phone 8 devices. Dragons Adventure is the result of a collaboration between HitPoint, DreamWorks, and Microsoft, and has been in the works for several months. The game has several pretty unique and innovative features, mostly centering around incorporating location-based, live data into an augmented reality experience.

In Dragons Adventure: World Explorer you play as Hiccup or Astrid and fly dragons from the DreamWorks franchise over a 3D-rendered map of the real world styled to look like Hiccup’s home island of Berk. Local points of interest show up in the world as medieval buildings. It’s not uncommon to fly over a Starbucks transformed into a medieval mead hall, or a Hilton styled to look like an ancient viking inn. You’ll even see different numbers of vikings milling outside those locations based on how popular the venue is. The game also factors in the time of day and weather conditions of the place in the world you’re traveling through, so while it might be a bright sunny afternoon outside your house, you can navigate your dragon through a rainy evening in Paris.

The most interesting part of Dragons Adventure though can only be experienced on a road trip. Hop in the car, select a destination, and the game will use your device’s GPS to plot a route out for you, complete with a viking ox-drawn cart that travels along with your car as you drive (just make sure you’re playing as a passenger!). Along your route you’ll receive quests to rescue stranded vikings, pick up lost sheep and bring them back to your cart, attack evil viking towers to free friendly imprisoned dragons, and explore the area around you to find new local hotspots you may not have known about.  Device sensor controls let you fly your dragon by tilting the device, just as if you were holding Toothless’ reins yourself! Check out some screenshots from the game below, and download it here if you’ve got a Microsoft Surface or Nokia tablet, or here if you’ve got a compatible Windows Phone 8.

Developing Dragons Adventure was both a huge challenge and a great experience for the HitPoint team. It’s the first real-time 3D game of this scope that we’ve developed, and there was no shortage of technical hurdles we had to overcome. The biggest challenges came from integrating the various real-world data streams into the game. The world map uses data pulled from HERE Maps to paint Berk-themed textures onto the game world’s terrain. The viking buildings that pop up throughout the game are populated based on local point-of-interest data from Foursquare. The local weather is obtained from a live feed. Getting all of these features integrated, playing nicely with each other, and incorporated into the game’s quest design was no small task, but I think HitPoint did an amazing job pulling it off, and those who have played the game so far have had similar opinions.

We’re definitely proud of the product we were able to build with DreamWorks and Microsoft, and hopefully there will be some more interesting updates to come down the road. In the meantime if you’ve got a compatible device go check the game out and let us know what you think!

DAWE Screenshot





Fablewood – HitPoint's Newest Game Out Now!

HitPoint is extremely happy to announce the release of our latest game, Fablewood. Free to play on Facebook now, Fablewood is a hidden object game where you banish evil from famous fairy tale stories you know and love, and build your own enchanted forest! We’ve worked incredibly hard over the past several months to make Fablewood as beautiful and fun as it is, and we’d love for you to try it out and give us your feedback. Just click the image above to go to the app and play.

Please visit our community page to leave your comments, and click the like button if you have fun playing! Don’t hesitate to invite your friends either. We’ve got a ton of new content in the works for Fablewood that we’ll be releasing over the coming months, including numerous hidden object scenes from famous fairy tales. We’re also working hard on an iOS version of Fablewood that will be available early next year, so you’ll be able to play Fablewood on the go!

If you’re still looking for a reason to check it out, take a look at some of the amazing artwork our team created for Fablewood below. There are tons more of these fantastic hidden object scenes, beautiful town buildings, and quirky items you can use to build and decorate your enchanted forest. We hope you enjoy playing the game as much as we had fun making it!







Disney Fairies – Fourth Adventure Available!

A new adventure in Disney Fairies: Hidden Treasures is now available. In North of Neverland, Tinker Bell’s ambitious plan to journey alone to the island north of Never Land soon runs into trouble when she finds herself stranded in the wilderness without pixie dust. Will Tink be able to find the Mirror of Incanta in the old pirate shipwreck in time to save the Autumn Revelry?

HitPoint has continued to put our all into making each new episode even better than the last. We definitely appreciate your feedback, so let us know what you think of it. Just purchase the new adventure from the game’s main menu and enjoy. Stay tuned for the next and final adventure of this season in the coming months!