Category Archives: FUTURETECH

The DNA Thief: NYC Artist Creates Replica Face Sculptures From Your Discarded Gum

Careful where you leave your DNA, folks: an artist in New York named Heather Dewey-Hagborg (well, that’s unfortunate) has been creating 3D printed sculptures/busts of people’s faces built from the DNA she has been able to scoop from chewed gum, cigarette butts, and strands of hair.


“Whatever you do, don’t look up its nose.”

Dewey-Hagborg finds a sample, extracts the DNA from it using a “DNA Investigator Kit” that she says is readily available from Qiagen, an online healthcare/lab-supply retailer, and then puts the sample through some rigorous analysis to infer the genetic make-up and likely appearance of her subject.

Her exhibit, titled Stranger Visions, is a collection of 3D printed faces of the subjects she has chosen and been able to analyze, and each one comes with additional (seemingly irrelevant, but fascinating) details such as the wetness of the person’s earwax, their resistance to malaria, and the likelihood that they will develop freckles or go bald.

While each piece is undoubtedly creepy, the overall concept is an intriguing one, and her work is groundbreaking and a sign of things to come in the art world.

As for me, I will now be leaving the house each day in a hazmat suit.


“The wetness of my earwax is MY business!”


FUTURETECH: Brain-to-Brain Communication

While almost every edition of FUTURETECH should have the tagline “prepare to have your mind blown,” this one might take the figurative cake so far.  Scientists at Duke University, who are probably not just scientists but also all-star basketball players, have developed something truly amazing: a brain-to-brain interface with which to communicate.  Thus far the experiment has only been tested on lab rats, but has yielded results that are truly astonishing as well as promising for the future of humanity.  Broad statement, I know, but read on.

The experiment is set up as follows:

  1. Train one group of rats, dubbed the encoders, to do a series of rather difficult tasks (for a rat), such as putting their nose through a hole to receive water, pulling or pushing a small lever, or reacting in some way to a light stimulus.
  2. Connect the encoders’ brains to a second group of rats’ brains via a set of transmitters and receivers.  This second group of rats, which has not been trained at all on the task at hand, is called the decoders.
  3. Separate the decoders from the encoders but place them in an identical scenario, still hooked up via the brain-to-brain interface.  Allow the encoder rats to do their assigned task for a small food reward, and then analyze the decoder rats’ ability to mimic the action while completely isolated.  Importantly, this stage can be done with any amount of distance between the rats, even across the globe.
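The steps above can be loosely sketched in software. To be clear, this is a toy analogy and not Duke’s actual interface: the action names, the “firing pattern” representation, and the noise level are all invented for illustration. The point it shows is the core idea of the experiment: the decoder side never sees the task, only a noisy signal derived from the encoder’s behavior.

```python
import random

# Two possible lever actions the encoder rat has been trained on.
ACTIONS = ["press_left", "press_right"]

def encode(action):
    """Stand-in for recording the encoder rat's motor activity:
    map the performed action to a crude, noisy 'firing pattern'."""
    pattern = [0.0] * len(ACTIONS)
    pattern[ACTIONS.index(action)] = 1.0
    # real neural recordings are noisy, so perturb each channel a little
    return [x + random.uniform(-0.1, 0.1) for x in pattern]

def decode(pattern):
    """Stand-in for stimulating the decoder rat's cortex: act on
    whichever channel carried the strongest signal."""
    return ACTIONS[pattern.index(max(pattern))]

# The decoder, never trained on the task itself, mirrors the
# encoder's choice purely from the transmitted pattern.
transmitted = encode("press_left")
print(decode(transmitted))  # → press_left
```

Since the transmitted pattern is all the decoder ever receives, the physical distance between the two sides is irrelevant, which is why the real experiment works even between labs on different continents.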

This can be seen in the following video:

Now, if this isn’t yet mind-blowing enough for you: in circumstances where the decoder rats were not accurately perceiving the pattern of moves needed to obtain the reward, the encoder rats would perform the task again, except more accurately.  The University’s scientists have hypothesized that further development of this technology could include the rehabilitation of stroke and Parkinson’s patients and, eventually, something as complex as human brain nets, a popular notion in science fiction whereby a group of people (or an entire race, as with Star Trek’s Borg) shares a single networked consciousness.  This means, essentially, making our brains 4G- or LTE-compatible, and being able to communicate in real time with each other, subconsciously, from across the globe.  Remind you of anything?




FUTURETECH: Cinema That Adapts to its Audience


In news that is damn near straight out of science fiction, writer/director/composer Alexis Kirke is producing a film called Many Worlds that interprets biosensor data collected from the film’s active audience and then changes on-the-fly to reflect their state.  The film, inspired by Schrödinger’s “quantum suicide” thought experiment, follows friends Charlie and Olivia on a visit to their friend Connie’s house for her 19th birthday, and this is where things get strange.

Upon arrival at the house the two find, instead of Connie, a sealed, coffin-like box in her bedroom; the box, in turn, is inferred to contain Connie herself along with a Geiger counter (used to measure radioactivity) which is ultimately connected to a cyanide gas emitter.  What is unknown, however, is whether or not the gas has already been released, and thus whether Connie, should she actually be sealed inside, is now in full-blown corpse mode, or remains alive.

The story’s outcome, though, is determined subconsciously by a number of audience members whose biometric data (heart rate, muscle tension, etc.) is measured and fed in real time to a computer, which then changes the progression of the film as if it were a steam locomotive switching tracks.  The end result is a 15-minute film that can end one of four ways and has several branching points leading up to the denouement.  How well it actually comes together remains to be seen, but the thought process and tech behind this are rather astounding, even if the general concept is not a new one.  Kirke himself notes that the hardest part of the process was coming up “with four different endings that wouldn’t embarrass [him].”
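The branching mechanism can be sketched as a simple decision tree. Kirke’s actual system hasn’t been published, so everything below — the sensor averaging, the threshold, and the two-branch-point tree yielding four endings — is an invented illustration of the general idea, not his implementation.

```python
def pick_branch(readings, threshold=0.5):
    """Average the audience's normalized arousal readings (0..1)
    and switch the film onto the 'tense' or 'calm' track."""
    arousal = sum(readings) / len(readings)
    return "tense" if arousal >= threshold else "calm"

# Two successive branch points yield one of four possible endings.
ENDINGS = {
    ("calm", "calm"):   "ending_1",
    ("calm", "tense"):  "ending_2",
    ("tense", "calm"):  "ending_3",
    ("tense", "tense"): "ending_4",
}

def run_film(act1_readings, act2_readings):
    """Evaluate the audience at each branch point and pick the ending."""
    path = (pick_branch(act1_readings), pick_branch(act2_readings))
    return ENDINGS[path]

# A relaxed first act followed by a tense second act:
print(run_film([0.2, 0.3, 0.1], [0.9, 0.8, 0.7]))  # → ending_2
```

The appeal of this design is that the audience never votes explicitly; the film simply reads them and, like the locomotive metaphor above, switches tracks on their behalf.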

Check out the video link below for a more detailed explanation of the whole concept!
