Moonshot - Barkley Labs

| Comments

It was time to put our MakerBot 3D printer to use doing typography.

To get going we downloaded some existing type and tried that out. The green “make” is from this design by 9999NYC.

Then we tried this 3D typeface from LorenIPSVM in red.

Moving into custom typefaces, we had the incredible Casey Lingon supply us with this maxim we strongly believe in. “Stay Curious” was printed in natural PLA, sprayed white, and then we used a roller to apply some black Plasti Dip to the face.

The word “JOE” was extruded from Angelface and then painted green. Cady said it looked like it should be part of GI Joe so we printed a quick Army Dude to watch over it.

Then we printed this from a font called Wendy and painted it with some colors we had around.

Hope to explore some more typography with 3d printing soon.


Inst-Int 2014

| Comments

Inst-Int 2014 was an inspirational experience. In only 2 days, the conference showcased projects from around the world that blended art and technology. Laser harps, tangible 3D modeling, and multi-story interactive projection mapping were only a few of the topics covered.

Some of my favorite highlights from the event:


G-Speak from Oblong: A spatial gestural operating system straight out of Minority Report.


inForm from MIT’s Tangible Media Group: Changing the way people interact with computers by replacing the screen with something real.


Ecstatic Epiphany by Micah Scott: A subtle, beautiful LED experience which gently reacts to human presence.


Patterned by Nature by SoSoLimited: An enormous LCD sculpture which abstracts and animates patterns commonly found in nature.

Everything at Inst-Int was visually impressive, but surprisingly, one of my favorite parts of each presentation was the explanation of logistics. How many people needed to be involved? How long did it take to get the contract signed? How much did the project cost? How much of the work did you outsource? Every “how” question was invaluable in describing the making process. These details help reveal the workflows and decision making behind each project, and I was happy to hear several presenters cover these questions.

Inst-Int also provided 3 additional workshops before the official opening of the event, one of which was creative coding with the Kinect v2. The Microsoft team that made the trip to Minneapolis was helpful in explaining the capabilities of the new device and gave great live demos of how to use the Kinect v2 with Cinder, openFrameworks, and even Unity. Can’t wait to start programming with the new Kinect (might be working on something today ;) )!

* A series of tutorials for the Kinect v2
* Kinect v2 Open Bridge, a C++ binding
* Kinect v2 with Cinder

Inst-Int also posts all talks from the conference for free, for everyone. The 2014 event is not yet posted, but last year’s talks are available.

Making Innovation Happen

| Comments

Moonshot began as an experimental lab, but as we’ve experimented with our own model and offerings, we’ve evolved. We’ve developed a robust innovation practice, helping clients explore opportunities not only in technology, but also in experience design and product innovation. Our innovation process fuses elements from Human Centered Design, Design Thinking, Maker Culture, Lean Startup, and our own programs, such as Rotation Week.

1. Empathy and Insights


Our process begins with establishing a deep understanding of the people for whom we are designing a solution. We want to understand not only their needs, but the emotional depth and context that surrounds them. We create a nuanced portrait of our users through ethnographic surveys, interviews, and site visits. We’re looking to understand not only their explicit needs but their implicit needs as well, generating a detail-dense description of our user that will provide the fuel we need to inspire the next phase of our process.

2. Interpretation and Ideation


We frequently transition to the next phase with a workshop. We convene a carefully curated collective composed of Barkley partners, client stakeholders, relevant outside provocateurs, and the Moonshot team. As a group, we begin by unpacking the learnings and identifying meaningful themes, insights, and opportunities. Next we craft a written Point of View: a concise, compelling expression of our new perspective on the user and her/his needs. We then generate lots of ideas in response to our Point of View. We brainstorm collectively and iteratively. We give voice to our boldest wonderings. We imagine possible futures. By the conclusion of the workshop we have uncovered many of the essential qualities, features, and components that should comprise the experience.

3. Prototyping and Testing


In the final stage of our process, we make our ideas tangible in the form of prototypes and share them with users for feedback. A prototype is a key moment in the design process - a bridge between concept ideation and project implementation. We execute our prototypes in quick test-and-learn sprints. At the start of each sprint, we identify a testable component of the overall experience, we design and build it to the degree necessary to demonstrate the concept, and we launch it on a moderated testing platform to a group of users. As the sprints progress, we ramp up the fidelity and level of complexity. The ideas and executions evolve and grow in response to real-time user feedback. This way of working allows us to examine micro moments of interaction and evaluate their viability within the holistic system. The culmination of this phase is an integrated test that allows us to assess how the elements of the experience work together and to layer on critical business viability analyses to ensure that the experience not only meets user desires, but will result in a strong, viable business as well.

At the conclusion of this third phase, we have a working prototype of our experience that has been rigorously user-tested and validated against the business scenario. The groundwork for product launch is purposefully laid, and we are off looking for our next challenge!

| Comments

We asked our intern, Gene, how he felt about his internship ending this week. 

| Comments

"We don’t need no education…"
The Wall of Rawk looms large in the lab. #usairguitar #bsocial  (at Moonshot Lab)


3D Printing Class

| Comments

Not very long ago it would have been practically impossible to do what we’ve done three times in the last two months at Moonshot Lab - we brought in interested partners from pretty much every department of the agency and in just two hours taught them enough skill to 3D print their own ideas.

We’ve had 3D printing capability here for a few years, and although our Printrbot tried to serve us bravely, it was just too delicate and way too hard to use.


Then a few months ago we got a new 3D printer. We chose the MakerBot Replicator (5th Generation) because MakerBot has gone to great lengths to make a beginner-friendly machine. It is robust, polished, and kind of idiot-proof.


The class starts with some theory and context about the significance and practicality of 3D printing. We then teach our partners to set up the MakerBot and prepare a print. We use a number of physical examples to explain the need for features such as supports and rafts, as well as the different print resolutions.


We then cover the basics of 3D CAD modeling with Autodesk’s free online Tinkercad. Everyone gets to the skill level where they can open a file from a library, edit it a bit, and add some artwork from an SVG file. We then learn how to save that file and prepare it for printing using Thingiverse.


The rest of the class is spent teaching how to 3D scan classmates using our handheld Sense scanner from Cubify. The 3D scans can be used to print out artifacts. We’ve even printed some scans as PEZ dispenser tops for students.


“I loved this class! It was fantastic. You covered all of the basics of 3D printing in a manageable amount of time, plus we got to play with 3D scanning.

I appreciated that we went through the process from start to finish and I feel confident that I could work the 3D printer without messing things up. I liked that you stepped us through the process of altering a project that already exists. It gave me a good idea of how the software works without having to think of something to make on my own. I feel confident that I can figure out the process of creating a file from scratch based on the tutorials you mentioned and the steps you walked us through in class.

The PEZ dispenser of myself is super fun! I’ll be the envy of all my friends. Thank you!”


- Ricky Catto

Internet of Wild Things Class

| Comments

We held our very first Internet of Wild Things class yesterday, a workshop which teaches people how to create machines out of everyday objects and connect them to the internet. Participants entered the class with no electronics or programming experience and were asked to bring in some kind of found object from home - a bottle of sunscreen, Legos, a disco ball, and a sewing machine were some of my favorites.

The class started with fundamental electronics concepts to get people excited and confident about making. Together, we wired a few LEDs and used a button to control a light. No one even got shocked!

Look ma, no code

A brief introduction to microcontrollers moved us into manipulating components via processor logic. Within a few minutes we were using a web interface to configure and load code onto our Spark Cores, magically enabling us to control elements of our projects via a web page with a few buttons!


The application used to configure and load code onto the Spark Cores is a Moonshot product and is available as a public project on GitHub.

If I press a button in the real world, make a tweet

After everyone was comfortable making basic electronic circuits, we began creating “if this, then that” style web flows via Zapier. This gave students a wide variety of webhooks, including Gmail, Twitter, text messaging, phone calls, Facebook, and many more.

After this 90 minute introduction, each student started working on their own projects.

Wear Sunscreen

Everyone in the class had great ideas, but two of my favorites were:

1. Sara Buck brought in a bottle of sunscreen and used a force-sensitive resistor to detect when the lotion was applied. The bottle would then schedule a reminder on her Google Calendar two hours in the future. When the reminder fires, the calendar sends her a text message reminding her to reapply to prevent sunburn.

2. Stacey Kledis brought in her sewing machine and began to create an application which logged the number of stitches and length of use. The machine also attempted to detect the type of fabric used by sensing pressure.

Unobtrusive Face Detection (Part 1): Computer Training

| Comments

Demos of emerging technologies frequently ask a person to stop what they’re doing and experience something new. Successful applications are typically fun and engaging, but we at Moonshot wanted to take our face recognition experience to the next level. We wanted to create something that required no action from the end user; it needed to be completely invisible and appear to magically just work.

Imagine an experience where a person unknowingly passes by a machine. The machine verbally says:

Hi Cady. Your timesheet is late. Just like it was last week, and the week before that, and the week before that…


Good morning Ricky, I know you were the one who broke the coffee machine.

Creepy, strange, potentially helpful? But wait a second… how did the computer recognize Cady? How did it know Ricky broke the coffee machine?

Training the computer to recognize faces

In order to accurately recognize a person, the computer needs to create a reference between a photo and a person’s profile - Ricky’s face needs to be paired with Ricky’s profile.

We’ve installed a hidden camera under the stairs at the main entrance of Barkley. A motion detector acts as a trip wire which snaps a quick photo of the passerby. These photos are put into a queue where the Moonshot team matches each face to a profile.

Populating the database with talking points

Making a computer deliver intensely personalized messaging is difficult. We thought about polling social media statuses, scraping profile data from our intranet, and looking up police records for each employee.

None of this felt personal enough because the computer doesn’t really know Ricky, but I do. We built a small web application to help us create talking points for each person. The Moonshot team can select a person and type in a personalized message.


Having a computer accurately capture a photo of a face turned out to be the most difficult part of this project. We started out with a nice web cam, but eventually gave up because auto-focus and lack of zoom were too problematic. We replaced the webcam with a DSLR on all manual settings and got much better results.


We wanted to keep the physical footprint of this project small, so we elected to use a Raspberry Pi. It’s a great little computer, but we are definitely stretching its capabilities. Running a motion detector, camera (Canon EOS Rebel), web server (node), database server (mongo), and OpenCV all at the same time pretty much maxes out the Pi’s processor. Luckily, we’ve only destroyed one machine so far.

Coming up Next

As soon as we’ve populated a substantial database of users and photos, we’ll begin to turn on the talking points. Part 2 coming soon!

If you’d like to keep track of the latest code releases or see the technology behind this application, check out the github repo.




- Joe Longstreet (@joelongstreet)

Vultures Rejoice, Introducing the Eddie-o-Matic

| Comments

Eddie is our concierge here at Barkley. He knows who’s in the office, what clients will be in town, and most importantly, where the free food is after an important meeting.

On those frequent occasions where there are leftover muffins, cake, or any other treat, Eddie will send out a company wide e-mail to let everyone know. Unfortunately the food goes fast and if you’re not one of the first vultures to show up… you’ll be going hungry.

Introducing the Eddie-O-Matic, a machine which immediately notifies the Moonshot team of free food and its location.

Behind the Scenes

A webhook acts as a listener for any incoming e-mails from Eddie. When one is received, it notifies a custom web application with the message details. The application checks to see if two conditions are met:

1. Does the e-mail contain words that look like food - donuts, bagels, Dean and Deluca!?!
2. Does the e-mail contain a location where we might find the food - a floor number or something similar?
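Those two checks can be sketched in a few lines of JavaScript. The word list and floor-matching regex below are simplified stand-ins, not the app’s actual vocabulary:

```javascript
// Hypothetical word list -- the real app's vocabulary is surely longer.
var FOOD_WORDS = ['donut', 'donuts', 'bagel', 'bagels', 'muffin', 'cake', 'pizza'];

// Check 1: does the e-mail mention anything that looks like food?
function mentionsFood(text) {
  var lower = text.toLowerCase();
  return FOOD_WORDS.some(function (word) {
    return lower.indexOf(word) !== -1;
  });
}

// Check 2: look for "3rd floor", "floor 3", etc. and return the floor
// number, or null if no location was found.
function findFloor(text) {
  var match = text.match(/(\d+)(?:st|nd|rd|th)?\s+floor|floor\s+(\d+)/i);
  if (!match) return null;
  return parseInt(match[1] || match[2], 10);
}
```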

If both conditions are met, the web app notifies the Spark API, which in turn immediately messages the Spark Core (the microcontroller we’re using for this project). The board then:

* Lights up the appropriate LED which matches the free food item.
* Rotates a servo so the arm points to the correct floor.
* Rotates another servo and “DINGs!” the bell (Eddie’s signature sound).


The Spark Core is a really great controller for getting ideas going fast. The hardware is easy to work with, and the board provides a Wi-Fi connection out of the box. The REST API is a huge convenience and is reliable and easy to use. If you’re willing to write a web app, you can make fairly complex mashups quickly.

- Joe Longstreet (@joelongstreet)

| Comments

We took sixteen Barkley partners through our Crash Course in Design Thinking today in the lab, based on the Stanford d.school’s Virtual Crash Course. Everyone had a great time learning and doing Design Thinking and making things for one another. Big lesson learned for the lab team: make sure you have plenty of pipe cleaners, duct tape, and construction paper.

| Comments

Chris Leon Rocks Pedalboard.js

Creative Tech takes many forms.

Vote for Moonshot!

| Comments

Moonshot is competing in KCSourceLink’s KC Battle of the Brands. Help us survive and advance by voting for us in the Innovation Bracket.

Vote here!

Showrooming in the world of iBeacons

| Comments

There’s a lot of talk about how iBeacons will revolutionize indoor location – how they’ll allow for incredibly precise context-aware content and notifications inside your apps.

But one point we’ve not seen many talking about is their effect on showrooming.

Showrooming, according to Wikipedia, is “the practice of examining merchandise in a traditional brick and mortar retail store without purchasing it, but then shopping online to find a lower price for the same item.”

The Amazon app is the perfect example of this. Kick open the app in store, scan a barcode, see if the product is cheaper on Amazon, buy it on your phone, then have it on your doorstep in two days.

Showrooming is making it difficult for brick and mortar retailers to compete with online merchants, especially considering the tax advantages of selling online.

But imagine how the problem could be exacerbated in an iBeacon-saturated world, where every aisle is broadcasting its identity to your apps.

What if Amazon rolled out a small update tonight that added a simple unnoticeable feature to their existing app? Imagine it just did one thing: every time someone scans a barcode with the Amazon app, the app records the list of iBeacons that are broadcasting in the area and reports home to Amazon for data collection.

Imagine the massive database of information Amazon could compile that ties specific product UPCs to individual aisles in retail stores. What would the repercussions be?

Well, imagine that Amazon makes another update to their app in a year or so once iBeacons become more common.

Now, whenever you stand in an iBeacon-enabled store aisle, the Amazon app automatically displays a list of products in your area with their prices on Amazon, enticing you to buy. You don’t even need to scan a barcode. It conveniently knows what’s around you, and possibly even price-matches products based on what previous customers may have reported in that aisle.

Furthermore, what if Amazon could upsell you on a product with a push notification? They already know what your interests are and have perfected the art of recommending products to you, so what if you got a push notification walking past the electronics aisle for that digital camera you’ve been considering? “Canon EOS Rebel T3i nearby if you’d like to try it in person.” Amazon could tell you to go look at it, to try it out, then buy it online, cheaper, with a tap in their application.

iBeacons are an incredible idea, but they carry with them an uncomfortable truth brands will have to face: They can dramatically improve your customer experience, but your brand has no control over how the technology gets used by others.

Showrooming may well only worsen, even as brands suppose they are making it easier for their customers to find and buy the products they want in store.


Moonshot is Hiring

| Comments

Moonshot is looking for a few good geeks. We have positions open for a Creative Technologist, an Experience Designer and a Technoculturist. Check ‘em out and apply. Feel free to share widely. - Mark

Where Am I?

| Comments


That’s the question we’ve set out to answer in the soon-to-arrive world of iBeacons. iBeacons give us much more granular information about the location of mobile devices than existing technology, especially in those hard-to-reach-by-GPS places.

So, while we wait for our actual BLE beacons to arrive, we’ve set up our iDevices in the office to function as transmitters and are making attempts to guess a smartphone’s location.

Here is the code (JavaScript for our PhoneGap app) that lets us triangulate a device’s position using its distance from three fixed points, our future beacons.

While we wrote the code for iBeacons, it can be used to triangulate the intersection of three circles in any JavaScript application, like this:

    var point = triangulate(
        [0, 0], 7,
        [10, 0], 7,
        [5, 10], 5
    );
The three arrays provide the fixed locations of the beacons (x, y), and the radius value represents the device’s distance reading from each. The function returns a point which represents the most accurate of six possible intersection points between the beacon broadcast signals.
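If you’d rather not read the full source, here’s a sketch of one way a triangulate() like this can work: intersect each pair of circles (up to six candidate points), then keep the point that best agrees with the remaining circle’s radius. This is a from-scratch illustration, not necessarily our exact implementation:

```javascript
// Intersection points of two circles (0, 1, or 2 points).
function circleIntersections(c1, r1, c2, r2) {
  var dx = c2[0] - c1[0], dy = c2[1] - c1[1];
  var d = Math.sqrt(dx * dx + dy * dy);
  // No intersection: coincident centers, too far apart, or one inside the other.
  if (d === 0 || d > r1 + r2 || d < Math.abs(r1 - r2)) return [];
  var a = (r1 * r1 - r2 * r2 + d * d) / (2 * d);
  var h = Math.sqrt(Math.max(0, r1 * r1 - a * a));
  var mx = c1[0] + (a * dx) / d, my = c1[1] + (a * dy) / d;
  return [
    [mx + (h * dy) / d, my - (h * dx) / d],
    [mx - (h * dy) / d, my + (h * dx) / d]
  ];
}

function triangulate(c1, r1, c2, r2, c3, r3) {
  var circles = [[c1, r1], [c2, r2], [c3, r3]];
  var best = null, bestError = Infinity;
  for (var i = 0; i < 3; i++) {
    var a = circles[i], b = circles[(i + 1) % 3], other = circles[(i + 2) % 3];
    circleIntersections(a[0], a[1], b[0], b[1]).forEach(function (p) {
      // Error = how far the candidate sits from the third circle's edge.
      var dist = Math.sqrt(
        Math.pow(p[0] - other[0][0], 2) + Math.pow(p[1] - other[0][1], 2));
      var err = Math.abs(dist - other[1]);
      if (err < bestError) { bestError = err; best = p; }
    });
  }
  return best; // null if no two circles intersect
}
```

With the example inputs above, this sketch returns a point near (5, 4.9) - the spot where all three distance readings roughly agree.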

We’ll let you know how it goes when our actual beacons arrive!