Japanese researchers decode human dreams

For several years researchers have been trying to apply the tools of science to get inside our heads. It is a noble effort that, when finally achieved, will represent a huge triumph of mankind over nature. Researchers at the ATR Computational Neuroscience Laboratories in Kyoto have developed powerful computational tools that use blood-flow data from MRI scans to approximately visualize what a person is experiencing in dreams. Their results were published in Science, along with considerable fanfare. Before studies like this can be taken at face value, though, a closer inspection of the actual methods and results is warranted.

In a small animal like a zebrafish, it is possible to look at a movie of its brain activity in real time and get a fairly clear picture of what the animal is looking at. For humans, we have no such convenient tool as whole-brain calcium imaging, but the experimentalist has one advantage over the zebrafish researcher: the subject can simply be asked what they are experiencing.

The Japanese researchers began with early-stage dream sleep reports from their three subjects and used a lexical database called WordNet to extract common keywords that appeared in the descriptions. They then scanned the subjects while they were viewing images selected from another database called ImageNet, which had been matched to the keywords. They could then compare these image-viewing scans with the earlier sleep scans that had generated the verbally reported keywords.
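The core of the comparison step is a pattern classifier: learn what a category's voxel pattern looks like from the waking, image-viewing scans, then ask which learned pattern a sleep-onset scan most resembles. The paper's actual decoder is not reproduced here; the following is a minimal sketch of the idea using a nearest-centroid classifier on simulated voxel data, with all numbers and category names invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50-voxel activity patterns for two dream-report
# keyword categories, each with its own underlying mean pattern.
n_voxels = 50
centers = {"building": rng.normal(0, 1, n_voxels),
           "person":   rng.normal(0, 1, n_voxels)}

def simulate_scans(label, n_trials, noise=1.0):
    """Noisy fMRI-like voxel patterns scattered around a category mean."""
    return centers[label] + rng.normal(0, noise, (n_trials, n_voxels))

# "Training" data: scans recorded while awake subjects view matched
# ImageNet images; one mean template pattern per category.
templates = {lbl: simulate_scans(lbl, 20).mean(axis=0) for lbl in centers}

def decode(pattern):
    """Nearest-centroid decoding: report the category whose waking
    image-viewing template correlates best with the given scan."""
    return max(templates,
               key=lambda lbl: np.corrcoef(pattern, templates[lbl])[0, 1])

# "Test" data: patterns recorded just before the subject is awakened.
test_patterns = simulate_scans("building", 10)
accuracy = np.mean([decode(p) == "building" for p in test_patterns])
```

The real study used far richer feature spaces and many more categories, but the logic of matching sleep scans against waking templates is the same.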

The full glamour of trying to decode true REM-stage dreams is just not experimentally practical: it usually takes a few hours to reach REM from normal sleep, and lying still inside an MRI tube until you start dreaming, several times a day for 10 days, would be too difficult. For the present study, the researchers could guess pretty accurately when the subjects first drifted off to sleep by watching simultaneously recorded EEG signals, and promptly roused them for reports of what they were imagining upon entering Neverland. Subjects typically drifted into this drowsy state within ten minutes, and could reliably report their visual imagery when awakened.
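Catching sleep onset from the EEG conventionally means watching waking alpha rhythms (roughly 8–12 Hz) give way to slower theta activity (roughly 4–8 Hz). A minimal sketch of that band-power comparison, on synthetic signals rather than real EEG, and with thresholds far cruder than any clinical sleep-staging criteria:

```python
import numpy as np

fs = 256                      # sample rate in Hz (typical for EEG)
t = np.arange(0, 4, 1 / fs)   # one 4-second scoring epoch

def band_power(epoch, lo, hi):
    """Total power in the [lo, hi) Hz band from the FFT power spectrum."""
    freqs = np.fft.rfftfreq(epoch.size, 1 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    return power[(freqs >= lo) & (freqs < hi)].sum()

def looks_drowsy(epoch):
    """Crude sleep-onset marker: theta (4-8 Hz) power overtaking
    alpha (8-12 Hz) power within the epoch."""
    return band_power(epoch, 4, 8) > band_power(epoch, 8, 12)

# Synthetic stand-ins: a 10 Hz (alpha) epoch and a 6 Hz (theta) epoch,
# each with a little broadband noise on top.
rng = np.random.default_rng(1)
awake  = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)
drowsy = np.sin(2 * np.pi * 6 * t)  + 0.3 * rng.normal(size=t.size)
```

Real sleep staging also weighs eye movements and muscle tone, but an online alpha/theta check like this is enough to know roughly when to wake a subject for a report.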

While the experimental protocols and the scale of computational resources applied here are impressive, a few red flags typically appear in these kinds of studies. MRI scans do not directly access neural activity per se; the signal is derived from blood-flow changes only peripherally linked to activity, and on a much slower timescale than actual neural spiking (or even synaptic transmission). The idea of jumping to the Full Monty, the biblical Joseph decoding the pharaoh's dreams, as opposed to further refining the simpler "picture a big red blob" kind of stuff, is also disconcerting. If you can decode a blob at 60% confidence, the next step should be to try to get results a little further out from your statistical margin, instead of going for something more complex, like dreams, at that same margin.
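The statistical-margin point is easy to make concrete: whether 60% correct is meaningfully above the 50% chance line for a two-way decoding task depends heavily on how many trials back it up. A quick exact binomial check, with trial counts chosen purely for illustration (not taken from the paper):

```python
from math import comb

def binomial_p_value(k, n, p=0.5):
    """One-sided exact binomial test: probability of seeing k or more
    correct decodings in n trials if performance is really at chance p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# The same 60% hit rate looks very different at 20 trials vs 200 trials.
p_small = binomial_p_value(12, 20)    # 12/20 correct
p_large = binomial_p_value(120, 200)  # 120/200 correct
```

With only 20 trials, 60% correct is entirely consistent with chance; with 200 trials it is not, which is why the margin, and not just the headline percentage, deserves scrutiny.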


In the same vein, the next step should not necessarily be, as the researchers indicate, to try to extend the same results across different subjects. Despite the anatomical similarity, different subjects are different brains, and again, the direction should not be broader scope at the same questionable confidence level, but deeper at higher confidence. Deeper necessarily means using real brain activity, and the researchers know it. The methods are smart, but the brain signals are the weak link. The final thing to underscore is that the actual decoding methods should be laid out for inspection by the larger interested community. Invariably in these studies, and we have seen a few, the details are not openly presented in the standard publication format.

In 2008, the same team reported preliminary studies from the primary visual cortical areas, the areas which first receive inputs ultimately derived from the retina. The newer studies also include higher areas of the brain which combine refinements of the visual input with concept-level ideations and information from other senses. There is a lot of experimental data to support activity correlations of things like faces, animals, or vehicles for example, within certain regions of the brain, but this is still largely a no-man’s land.

To seriously talk about decoding dreams, or even a simple "guess what I am thinking" kind of thing, it might be prudent to initially stick with the lower visual areas. At the conscious level, sure, we experience whole concepts and images, but at the moment at least, things like moving bars and shapes are more approachable experimentally. This issue was taken up in more detail earlier this week, in the comments section regarding Obama's new BRAIN Initiative. The idea discussed there was to record from low-level areas that are both accessible and somewhat predictable from an actual visual stimulus presented to the retina. The so-called optic radiations, reachable from interior ventricle locations, were one such place of opportunity. These fiber tracts carry real-world visual data toward the cortex, but also return information about what the brain wants to hear from that data stream.

We are not trying to discourage these kinds of dream decoding studies; on the contrary, we think they are fantastic. The concept of using big data and powerful machine learning-based algorithms has one caveat, though. They will scale rapidly for sure, but it is not clear that the mind becomes tractable at a similar scale. Training decoding networks takes time and a lot of data. New trial data can only be acquired from slow MRI at a certain rate, and the faster brain signals that correlate on shorter timescales are sorely needed. Undoubtedly though, this line of pursuit will eventually have direct application for BCIs, thought-to-text (or speech), and as-yet-unimagined prosthetic wonders. There will be those who worry about misuse and invasion of privacy, and their points will be taken at every stride, but largely they are those who do not understand the technology.

By John Hewitt | ExtremeTech
