AMRI Bioprinting: Integrating Makers and Scientists

Jordan Miller, Assistant Professor of Bioengineering at Rice University, shares how the Advanced Manufacturing Research Institute (AMRI) provides a scientific framework for bioprinting by collaborating with the 3D printing, DIY bio, and scientific communities and by repurposing both commercial and open source hardware.

AMRI is using engineering principles to learn more about biology, exploring how to print and cast tissues and blood vessel networks from extruded sugars and gels. Miller states that science is about being open: scientists are supposed to be able to reproduce experiments. This fits well with the maker community, whose members are already directly invested in sharing and gaining knowledge.

However, Miller says, scientists “didn’t get it” until AMRI formalized the interaction.

“This was what the power of the maker community allowed us to do in science and we couldn’t do working with a commercial company.  Commercial companies wouldn’t give us the schematics of their machine to be able to redesign it, rip out the internals and put a sugar extruder on the machine instead.”


Inspired by the format of Google’s “Summer of Code” project, AMRI brings in fellows to work on focused research projects within a scientific framework. By creating a structure for talented makers from around the world who have all the “key ingredients” to be scientists, AMRI is able to focus on targeted projects for improving human health. With an emphasis on education and on building the intellectual framework of the fellows themselves, this year’s AMRI fellows attacked the printing and casting of tissues and vascular networks from three different angles.


Anderson Ta, Digital Fabrication Studio Technician at the Maryland Institute College of Art (and also a printer tester for the 2014 Ultimate Guide to 3D Printing), repurposed a DLP projector and, by changing the throw rate, used a vat-based photolithography process to micro-cure gels for tissue casting. Cells can then be embedded into the 3D printed gel during the polymerization process.

Steve Kelly, a mathematics undergrad at Worcester Polytechnic Institute, modified an inkjet printer to extrude living bacteria. Inspired by the work of the DIY bio group BioCurious, which modified a CD tray with an inkjet head for printing bacteria, Kelly decided to improve the process. He used a thermal inkjet head to print bacteria in very small droplets, about the width of a human hair.

Andreas Bastian, formerly at the MakerBot R&D lab and heavily involved in the e-NABLE 3D printed prosthetics project, was interested in modifying and applying wax laser sintering processes to sugar. He took a commercial laser cutter, modified it to sinter sugar, and built his own Z axis to fit inside the cutter, creating “sugar glass” for tissue engineering investigations.

Interested in participating as one of next year’s AMRI fellows?  Keep an eye on the AMRI site for the next open call!

Monitor the UV Index With UVindeSir

I love the idea of modular sensors that can be plugged into a smartphone. Case in point: the UVindeSir, a tiny UV sensor that plugs into a smartphone’s headphone jack, reports the UV index and temperature, offers an SPF suggestion, and reminds you to put on some sunblock.

The project is being crowdfunded in an Indiegogo campaign and is supported by UFactory, an OSHW startup in Shenzhen, China.

Use an Oscilloscope to See Your Pulse



MAKE reader Scott recently made a modified version of the IR Pulse Sensor, combining the design by Sean Ragan with the “simple and sensitive” (but also “slightly more complicated”) circuit originally crafted by MarkusB from Let’s Make Robots. He added a few other components along the way, notably the trimpot and status LED seen on the topside of the perfboard.


Ultimately this was done so Scott could tinker around with his recently acquired oscilloscope! You can read about his IR Pulse Sensor build and many more of Scott’s wonderful electronics projects over on his blog, lungstruck.

Now imagine controlling a beetle with your pulse!

[Hat tip Em!]

How Gesture Control Will Become a Reality in Healthcare


With the popularity of gesture-controlled devices like the Nintendo Wii, Microsoft Kinect, and PlayStation Move showing no signs of slowing down, it’s clear that individuals playing games want to be more physically involved and to interact in ways never before possible. This increase in physical activity has helped to break the stigma of some games turning us into “couch potatoes,” but is there more of a therapeutic benefit to this type of interaction? I believe there is.

Many assisted-living homes across North America have already adopted programs which utilize the Nintendo Wii to increase physical activity among the senior population. This has not only improved the vitality of those involved but, in most cases, has also radically improved their quality of life. As a person with a physical disability myself (Cerebral Palsy, or CP) and a former game industry employee, video games have always intrigued me, for two reasons. The first is that I could have powers and abilities far beyond my own. The second is that I could use my existing abilities and, in my case, improve my fine motor functions through repetitive actions. This is something that both my physiotherapist and occupational therapist had taught me over the years: the more I trained myself to do daily tasks like tying my shoes or doing up buttons, the easier they became. And yet, it was never as fun as when I was engaging similar muscle groups in a gaming environment.

Throughout my time in the game industry, I often wondered why software developers didn’t capitalize on the therapeutic benefits of games. If I could just show them, perhaps they would see things in a new light? By a chance encounter in August of 2012, I was able to do just that. That August, I came across Reality Controls, a software development company based in Vancouver, Canada. What started out as a simple review of their software for a charity which hosts game reviews for persons with a disability turned into so much more. I recognized that the software they were developing could be used in therapy and rehabilitation, and as a result, I was brought on as their Director of Communications to help realize this vision.

Marco Pasqua (left) and Reality Controls CEO Sean Sibbet giving their TEDx talk.

One of our projects is called control:mapper. Control:mapper enables a person with limited mobility to create custom templates that map their natural body motions and verbal input to any arrangement of keyboard and mouse commands needed for their favorite Windows software. It serves as a flexible framework for individuals with a wide range of abilities. One such individual was Darrell Wyatt. Like me, Darrell has CP, and after sitting down for only a few minutes with control:mapper to control a game, he had this to say [video]:

It’s more interactive … actually being able to do the movements and make your characters move and do whatever is involved in the game, at least for me, would give me a whole different feel like I’m actually a part of the game. It’s not just a game; you actually have to do something. This would be a way to get exercise that I don’t normally get.
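Under the hood, a template of this kind boils down to a lookup from recognized motions or voice phrases to synthesized keyboard and mouse events. Here is a minimal, purely hypothetical Python sketch of the idea; the gesture names, template format, and `send` callback are my own inventions, not Reality Controls’ actual design:

```python
# A gesture-to-command template: each recognized motion or voice
# phrase maps to the keyboard/mouse action the game expects.
TEMPLATE = {
    "raise_right_arm": "key:W",      # move forward
    "lean_left":       "key:A",      # strafe left
    "lean_right":      "key:D",      # strafe right
    "say:jump":        "key:SPACE",  # voice input maps the same way
    "nod":             "mouse:LEFT_CLICK",
}

def dispatch(gesture, template, send):
    """Translate one recognized gesture into an input event.
    `send` is whatever backend actually synthesizes the keypress."""
    action = template.get(gesture)
    if action is None:
        return None            # unmapped gestures are simply ignored
    kind, _, value = action.partition(":")
    send(kind, value)
    return action
```

In a real system, the `send` callback would drive an OS-level input synthesizer, and each user would load a template tuned to the motions they can perform comfortably.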

This proved that we could truly impact a lot of people, but how were we going to reach them all? To do that, we looked into telerehabilitation. We’ve made big strides in this arena, and our work to enhance accessibility and rehabilitation has made a mark on Vancouver’s tech scene.

At the 2013 Vancouver Mini Maker Faire, we illustrated how this is possible, with two networked Kinect workstations enabling individuals to interact in an immersive environment. This simple example shows how a client and their therapist can interact, using virtual avatars to track the client’s progress, regardless of where they are physically located. Not only is this ideal for individuals in rural communities, it also supports ongoing care for clients who have had a stroke or spinal cord injury, without the need to travel to a facility for every rehabilitation session.

Marco and Sean at the Reality Controls booth at Vancouver Mini Maker Faire.

We’ve had the opportunity to work with the University of British Columbia (UBC) on a project called “FEATHERS” (Functional Engagement in Assisted Therapy through Exercise Robotics). The aim of the FEATHERS project is to motivate children, adolescents, and adults who’ve had a stroke (or who have other upper-extremity limitations, such as Cerebral Palsy) to continue with an exercise program by using social media, online games, and robotic interfaces. Having personally gone through 18 years of physical therapy, I know how beneficial it would have been to have a fun and engaging application that enhanced my therapy sessions. At the same time, my physiotherapist would have loved having tangible results with which to track my progress.

This is only the beginning of where we envision Reality Controls going. We imagine a world where clients don’t have to go into their therapist’s office each visit to get their progress results, but instead can do their assigned program from the comfort of their own home and have their practitioner monitor the results remotely. Furthermore, with the Kinect 2.0 for Windows coming in 2014 and rumored to pick up not only macro-motions (e.g., arm and leg joints) but also micro-motions (e.g., finger and eye movements) along with voice control, we’ll be able to reach and support more individuals regardless of their abilities.

By building strong relationships with some of today’s leading educators and healthcare providers, we feel it is only a matter of time before this is not just a possibility, but a reality.

Now Serving: A $330,000 Lab-Grown Burger


The taste of tomorrow?

What is being billed as the world’s first (and most expensive) cultured hamburger patty debuted in London today, NPR reports. And the project’s anonymous funder was unveiled, too: it’s Google’s Sergey Brin.

The unveiling of “cultured beef,” as the burger is branded, was a production worthy of the Food Network era, complete with chatty host, live-streamed video, hand-picked taste testers, a top London chef and an eager audience (made up mostly of journalists). Rarely has a single food gotten such star treatment. But this was no ordinary food launch, of course. The burger, which began as just a few stem cells extracted from a cow’s shoulder, represents a technology potentially so disruptive that it has attracted the support of Google co-founder Sergey Brin. “Sometimes a new technology comes along and it has the capability to transform how we view the world,” Brin says in a promotional video released Monday, the same day he was unmasked as the anonymous donor who ponied up money to grow the burger.

Cultured Beef is not the first project to culture meat. Missouri’s Modern Meadow (which is backed by PayPal’s Peter Thiel) is working on lab-grown meat and leather and has already produced a tiny pork chop. Check out my interview with CEO Andras Forgacs on Google+ here.

Assuming the price came down, would you eat a lab-grown burger?  Is it more or less off-putting than the current state of industrially raised beef?

Better 3D Imaging Aids Surgery for Deep Brain Stimulation

Last weekend, I attended the Internet Cowboys Un-Conference (ICUC) in Jackson Hole, WY, organized by Yossi Vardi at the ranch of Yuval and Idith Almog. I was really glad to meet a lot of interesting folks in such a spectacular setting. One highlight was a presentation on 3D imaging of the brain by Guillermo Sapiro of Duke University and Noam Harel of the University of Minnesota. New MRI technology using high magnetic fields (7 Tesla) produces a higher-resolution 3D image of the brain that can be used to guide a neurosurgeon who is placing electrodes in the brain of a patient with Parkinson’s disease. This technique is called deep brain stimulation (DBS). The presentation offered an amazing insight into the role technology is playing in science, helping to restore the life of a person with a disease as debilitating as Parkinson’s.

Above is a video of a Parkinson’s patient who underwent DBS, a treatment that applies electricity to a specific region of the brain. A surgeon implanted electrodes in the patient’s brain; wires run from the electrodes to a controller placed inside the patient’s chest, very similar to a heart pacemaker. The surgery takes 5-6 hours, during which the patient is conscious. Afterwards, as demonstrated in the video, the patient is free from his Parkinson’s symptoms and has full control: he can turn the stimulation on and off, and adjust its level, to eliminate the tremors caused by Parkinson’s.

New higher resolution MRI can be used to build a 3D model of a particular patient’s brain, so that the surgeon can know with greater precision where to place the electrodes during surgery, which is what takes up most of the time during the operation. Moving the electrodes a millimeter or two can result in either less effectiveness or unexpected side effects. The difficulty is finding exactly the right place.

A 3-dimensional model of the mesencephalon, thalamus, and surrounding regions. Volume renderings of the globus pallidus (green), red nucleus (red), subthalamic nucleus (yellow), and substantia nigra (blue) fused with a T2-weighted image.


[Image source: “An Assessment of Current Brain Targets for Deep Brain Stimulation Surgery With Susceptibility Weighted Imaging at 7 Tesla”, Aviva Abosch et al, Neurosurgery, December 2010.]

The brain model that surgeons typically use is just that: a reference model, based on one patient’s brain donated many years ago. Surgeons would locate electrodes based on a general idea of where the target regions are. The new technology lets the surgeon study the structure of this individual patient’s brain; it is a highly specific map of the patient’s brain rather than a reference map. Eventually this kind of MRI will be done in real time during surgery. For now it is done in advance, but it offers a better guide for the surgeon, who does use lower-resolution MRI during the operation to follow the electrodes moving through the brain.

Professor Sapiro and Dr. Harel, who have been developing the 3D imaging techniques, say that scientists don’t really know why DBS works, but that it is proven to work. I found that really interesting.

When we think of applying electricity to the brain, we might think of electroshock, a brute-force method that had mostly negative results. DBS is much different: it applies a current to a specific location in the brain that has been mapped with great precision. This approach has had profoundly positive effects for Parkinson’s patients. However, only an estimated 15 percent of Parkinson’s patients elect this surgery, perhaps because they fear the lengthy surgical procedure. There are also experiments applying DBS to chronic depression and to memory loss caused by Alzheimer’s.

Dr. Harel said to me that new techniques and new technology are emerging that are “in search of new applications.” It’s an exciting time, creating new opportunities for innovation. We see a lot of stories about fairly trivial uses of technology and innovation that is rather mundane. This is a story with elements that are familiar to most makers such as 3D scanning and modeling and electric circuits, but it is a medical story about how technology changes people’s lives. “You should see the patient’s face during/after surgery when they understand that they are free from tremors,” Dr. Harel wrote in an email. “It’s priceless! It’s a great feeling knowing that you are part of this amazing procedure.”

How-To: 3D Print a Model of your Brain

Reddit user intirb recently posted a detailed tutorial on how to 3D print a model of your own brain using an MRI scan. If you haven’t had your head checked lately (I should hope you haven’t had to), intirb suggests inquiring in local university neuroscience departments to see if you can participate in a clinical trial in exchange for the MRI scans.

Once you have your MRI data, with the grey and white matter imaged in thin slices, you can import it into a program called FreeSurfer. It’s a highly specialized piece of software, and familiarity with Linux is recommended, although there are plenty of online resources to help you out.

FreeSurfer will generally take 1-2 days on an average desktop computer to process the MRI data and convert it into an STL file. The resulting file is so complex that it needs to be brought into MeshLab for simplification: most programs that handle STL files cannot work with objects having more than 20,000 faces, so MeshLab is used to reduce the file below that threshold.
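MeshLab’s built-in decimation filters are the practical route here, but the underlying idea is easy to see in miniature. The sketch below shows one simple simplification strategy, vertex clustering: snap vertices to a coarse grid, merge the ones that share a cell, and drop the triangles that collapse. The function name and parameters are illustrative only; MeshLab’s filters use more sophisticated algorithms.

```python
import numpy as np

def decimate_by_clustering(vertices, faces, cell_size):
    """Simplify a triangle mesh by snapping vertices to a voxel grid
    and merging all vertices that land in the same cell.
    vertices: (V, 3) float array; faces: (F, 3) int array of indices."""
    # Assign each vertex to a grid cell.
    cells = np.floor(vertices / cell_size).astype(np.int64)
    # One representative vertex per occupied cell (the cell's mean position).
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse)
    new_vertices = np.stack(
        [np.bincount(inverse, weights=vertices[:, axis]) / counts
         for axis in range(3)], axis=1)
    # Re-index faces and drop triangles whose corners collapsed together.
    new_faces = inverse[faces]
    keep = ((new_faces[:, 0] != new_faces[:, 1])
            & (new_faces[:, 1] != new_faces[:, 2])
            & (new_faces[:, 2] != new_faces[:, 0]))
    return new_vertices, new_faces[keep]
```

Merging nearby vertices is also why aggressive simplification flattens fine surface detail such as the brain’s folds, so it pays to stay just under whatever face limit your software imposes.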

Once this process was complete, intirb had success with dumping the STL straight into his MakerBot. Using this method, you can have a model of your own brain after just two to three hours of printing.

3D-Printing Mechanical Hands

This is really cool: a MakerBot Industries-supported 3D-printable prosthetic hand project.

When Richard Van As, a master carpenter in Johannesburg, South Africa, decided to make a set of mechanical fingers, it wasn’t just for fun. He’d lost four of the fingers on his right hand in an unfortunate work accident. For a tradesman like Rich, having a disabled hand is a big professional detriment, so Richard decided on the day of the incident that he would use the tools available to him to remedy his situation. Watch the inspiring video above to hear how Richard’s project, Robohand, is changing lives with patience, spirit, and a MakerBot Replicator 2.

You can check out the project’s current designs on Thingiverse. [via RasterWeb]

DIY Surgery

File this one under DIY medical care.

Whether you lack medical insurance, spend time outdoors far from medical care, or don’t want to fork over cash for minor medical procedures, it makes sense to learn how to care for yourself and save a trip to the doctor. Over on the Resilient Communities website, a reader submitted a video of his DIY medical tip: using Super Glue to close a minor head wound instead of going to the ER for stitches. Warning: the video is a bit gory, and MAKE doesn’t endorse such a procedure. But it does raise some interesting questions. Here’s what the contributor to Resilient Communities wrote with his video submission:

The video is meant to be funny too, but I think it also illustrates one area where most people are completely dependent on the system: health care. Most people like me grew up with health insurance from their parents that covered the whole family for pretty much anything. Cold? Go to the doctor. Hurt playing sports? Go to the doctor. Need stitches? Definitely go to the doctor. The past few years of living uninsured or marginally insured has taught me just how much we can manage on our own when we don’t have much other choice. Showing the video to friends and family has gotten extreme and mixed reactions. Dad (who grew up really rural) thought it was great, resourceful; others thought it was crazy to do anything but spend the $1000+. Strikes me that it might be a bit taboo (for some people) to even suggest they take any aspect of their medical care into their own hands.

I don’t know about you, but I’m going to start carrying Super Glue on my backcountry trips. What do you think?

Today on Food Makers: 3D Printed Food


3D printing turkey at Cornell University.

Today on Food Makers, a Google+ hangout on air at 2pm PST/5pm EST, I’ll be exploring the how and why of 3D printed food with three luminaries in the field: avant-garde chef Homaro Cantu of Moto restaurant in Chicago, Jeffrey Lipton from Cornell University’s Fab@Home, and Andras Forgacs of Modern Meadow, a biotech firm developing the technology to print raw meat grown from animal cells (petri dish meat, if you will).

Is 3D printed food the future? Would anyone want to eat it if it was? Tune in right here to find out. If you can’t make it to the live broadcast, check out the archived video on our YouTube page.