Researchers at Princeton University have published a study exploring the possibility of translating brain images into text. They have apparently been taking pictures of brains and reading them like wee novellas. I’ll give you a moment to consider that.

Okay, did you freak out a little bit? I did! I was torn between dancing a jig of joy and a jig of fear. Let’s find out more!

From analysis of functional magnetic resonance imaging (fMRI) performed while study participants read the names of 60 objects, researchers were able to generate lists of words relating to each object. The top ten words from these scans were compared with Wikipedia articles about each object, and the articles were consistently found to be chockablock with those words. (Holy crap! Feel free to do a jig of wonder at this point.)

For example, the fMRI results from a participant reading the word dress were found to contain the words wear, woman, clothe, century, dress, type, form, fashion, style, and design. We don’t really need to consult Wikipedia to know the brain scan translation was right on the money there. Other results showed that the system may need some honing: images taken while subjects read the word bell were interpreted as produce, wine, contain, state, time, world, common, type, process, and century. Overall, though, the words produced were impressively on-target. You can read and marvel at the full list here.
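If you’re curious what that comparison step looks like in practice, here’s a toy sketch of the idea: take a decoded word list and count how many of its words actually show up in a reference text. The dress word list is from the study; the "article" snippet below is invented stand-in text, not actual Wikipedia content, and this is a crude substring check rather than the researchers’ actual method.

```python
# Toy sketch: check how many decoded words show up in a reference text.
# The decoded word list is from the study; the "article" text below is an
# invented stand-in, NOT actual Wikipedia content.

def overlap(decoded_words, article_text):
    """Return the decoded words that appear somewhere in the article text."""
    text = article_text.lower()
    return [w for w in decoded_words if w in text]

dress_words = ["wear", "woman", "clothe", "century", "dress",
               "type", "form", "fashion", "style", "design"]
article = ("A dress is a garment worn by a woman, with styles of "
           "clothing and fashion design changing each century.")

hits = overlap(dress_words, article)
print(f"{len(hits)}/{len(dress_words)} decoded words found: {hits}")
```

Even this crude check finds most of the dress words in a couple of sentences of dress-ish text, which gives you a feel for why the Wikipedia comparison was so striking.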

Researchers Francisco Pereira, Matthew Botvinick, and the yummy Greg Detre believe that this technique could be used in the development of computer-human interfaces, which seems like tragically small thinking for such smart guys. (Sorry Greg! Smooches!) The most obvious application is actually interrogation.
Figure from Pereira et al.'s jaw-dropping pre-print article, showing how brain scans were converted into text relating to the concept "table." Whoa.