ATR brainliner is a web portal for sharing brain and behavioral data for neuroscience.

Showing all results:

Results 1 - 23 of 23


  • faaa from czzzzz last updated 3 years ago
    Other

    faaa

  • eeg_analyse from Hangzhou Dianzi University last updated 4 years ago
    Other

    just a try

  • test from test last updated 4 years ago
    Other

    test

  • Blur Image Sharpening from Kamitani Lab, Kyoto University last updated 5 years ago
    Other

    Original paper: Abdelhack & Kamitani (2017) Top-down sharpening of hierarchical visual feature representations. bioRxiv. https://doi.org/10.1101/230078

  • Neurotycho: Monkey Anesthesia Task from Fujii Lab, RIKEN last updated 5 years ago
    ECoG

    Overview

    This is ECoG data from Yanagawa, T., Chao, Z. C., Hasegawa, N., & Fujii, N. (2013). "Large-Scale Information Flow in Conscious and Unconscious States: an ECoG Study in Monkeys." PloS one, 8(11), e80845.
    A monkey sat still with its head and arms restrained and was then anesthetized so as to lose consciousness.


    Task

    One monkey sat calmly with its head and arms restrained. ECoG data were recorded first in the alert condition and later under anesthesia.


    Data

    128-channel ECoG data were sampled at 1 kHz.

    Data are originally from http://neurotycho.org/anesthesia-task.
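
    As a loading sketch (the file name "ECoG_ch1.mat" and variable name "ECoGData_ch1" are assumptions about the MATLAB layout of the download; adjust them to the actual files):

        import numpy as np
        from scipy.io import loadmat

        FS = 1000  # sampling rate in Hz (1 kHz, as stated above)

        # Hypothetical file/variable names; check the actual download.
        mat = loadmat("ECoG_ch1.mat")
        signal = np.ravel(mat["ECoGData_ch1"])
        print(f"{signal.size} samples = {signal.size / FS:.1f} s of channel 1")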

  • Speech Imagery Dataset from CREST last updated 5 years ago
    EEG Other
    Overview

    EEG data during vowel speech imagery for 3 subjects. Sample dataset from:



    Charles S. DaSalla, Hiroyuki Kambara, Makoto Sato, Yasuharu Koike, Single-trial classification of vowel speech imagery using common spatial patterns, Neural Networks, Volume 22, Issue 9, Brain-Machine Interface, November 2009, Pages 1334-1339, ISSN 0893-6080, DOI: 10.1016/j.neunet.2009.05.008.


    Task

    Three tasks: speech imagery of /a/ (ai), speech imagery of /u/ (ui), and rest (re).
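
    The cited paper classifies these imagery trials with common spatial patterns (CSP). Below is a minimal sketch of computing CSP filters, assuming X_a and X_u are (trials, channels, samples) arrays of band-pass-filtered epochs for the two imagery classes (these names and shapes are assumptions, not part of the dataset):

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(X_a, X_u, n_pairs=2):
            """Spatial filters maximizing variance for one class relative to the other."""
            def mean_cov(X):
                # Average trial-normalized spatial covariance
                return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
            C_a, C_u = mean_cov(X_a), mean_cov(X_u)
            # Generalized eigenvalue problem: C_a w = lambda * (C_a + C_u) w
            vals, vecs = eigh(C_a, C_a + C_u)
            order = np.argsort(vals)
            picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # both ends of the spectrum
            return vecs[:, picks].T  # (2 * n_pairs, channels)

    Trial features are then typically the log-variances of the CSP-filtered signals, fed to a linear classifier.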


    Data
  • Monkey ECoG Visual Objects from Niigata University last updated 5 years ago
    ECoG Other

    Overview

    This is ECoG data from Matsuo T, Kawasaki K, Osada T, Sawahata H, Suzuki T, Shibata M, Miyakawa N, Nakahara K, Iijima A, Sato N, Kawai K, Saito N, Hasegawa I (2011): "Intrasulcal electrocorticography in macaque monkeys with minimally invasive neurosurgical protocols" Front. Syst. Neurosci., 5, p. 34. In this study, a newly-designed ultra-thin ECoG probe was introduced. The feasibility of the new ECoG protocol was demonstrated by the high signal-to-noise ratio of the signals and low (almost no) physical damage in the implanted areas.


    Task

    A macaque monkey was trained in a visual fixation task to keep its gaze within ±1° of the fixation target. 24 photographs of objects from a wide variety of categories, including faces, foods, houses, and cars, were presented as stimuli. Each stimulus was presented for 300 ms, followed by a 600 ms blank interval.


    Data

    128-channel ECoG data from the temporal lobe, recorded at 1000 Hz, are included.
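
    Given the 300 ms stimulus / 600 ms blank timing above, trials can be cut from the continuous recording as in this sketch (the data layout and onset indices are assumptions about the download):

        import numpy as np

        FS = 1000  # sampling rate in Hz

        def epoch(data, onsets, pre_ms=100, post_ms=300, fs=FS):
            """Cut a (channels x samples) array into trials around stimulus onsets."""
            pre, post = int(pre_ms * fs / 1000), int(post_ms * fs / 1000)
            # Result: (trials, channels, pre + post samples)
            return np.stack([data[:, t - pre:t + post] for t in onsets])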

  • Generic Object Decoding from Kamitani Lab, ATR last updated 5 years ago
    Other

    Original paper: Horikawa T, Kamitani Y (2017) "Generic decoding of seen and imagined objects using hierarchical visual features." Nature Communications 8:15037.


    Demo code: GitHub/KamitaniLab/GenericObjectDecoding

  • Position Decoding from Kamitani Lab, ATR last updated 6 years ago
    Other

    Overview

    This is fMRI data from Majima K, et al. (2017) "Position Information Encoded by Population Activity in Hierarchical Visual Areas." eNeuro.


    Demo code: GitHub/KamitaniLab/PositionDecoding

  • Visual Image Reconstruction from Kamitani Lab, ATR last updated 6 years ago
    Other

    Overview

    This is fMRI data from Miyawaki Y et al. (2008): Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron. Dec 10;60(5):915-29. In this study, a custom algorithm, sparse multinomial logistic regression, was used to train decoders at multiple spatial scales and to combine them with a linear model in order to reconstruct the presented stimuli.


    Task

    The experiment consisted of a human subject viewing contrast-based images of 10x10 flickering image patches. There were two types of image viewing tasks: (1) geometric-shape and alphabet-letter viewing and (2) random image viewing. A block design was used, with rest periods between the presentation of each image. For 10x10 image patches defining common geometric shapes or alphabet letters, image presentation lasted 12 s, followed by a 12 s rest. For random 10x10 image patch presentation, image presentation lasted 6 s, followed by a 6 s rest.


    Data

    Voxels from V1, V2, V3, V4, and VP cortex are shared in this dataset. All data are pre-processed and ready to use for machine learning.
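
    As an illustration of the decoder type described above, here is a sketch using scikit-learn's L1-penalized logistic regression as a stand-in for the paper's custom sparse multinomial logistic regression (synthetic arrays stand in for the shared voxels and patch labels):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 500))  # 200 trials x 500 voxels (synthetic)
        y = rng.integers(0, 2, 200)          # binary state of one local image patch

        # The L1 penalty encourages sparse voxel weights, as in the paper's decoders
        decoder = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000)
        decoder.fit(X[:150], y[:150])
        print("held-out accuracy:", decoder.score(X[150:], y[150:]))

    In the paper, one such decoder is trained per local image basis and their outputs are combined linearly across spatial scales.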

  • Rat Eye Stimulation from Niigata University last updated 6 years ago
    ECoG Other

    Overview

    This is ECoG data from Toda H, Suzuki T, Sawahata H, Majima K, Kamitani Y, Hasegawa I (2010): "Simultaneous recording of ECoG and intracortical neuronal activity using a flexible multichannel electrode-mesh in visual cortex." Neuroimage Jan 1;54(1):203-12.


    Task

    Three rats had their eyes stimulated by a visual grating, under four task conditions: 1) left eye only, 2) right eye only, 3) both eyes stimulated, 4) neither eye stimulated.


    Data

    32-channel ECoG data from 3 rats, recorded at 1000 Hz, are included.

  • Neurotycho: Food Tracking Task (Subdural) from Fujii Lab, RIKEN last updated 6 years ago
    ECoG Other

    Overview

    This is ECoG data from Shimoda K, Nagasaka Y, Chao ZC, Fujii N (2012): "Decoding continuous three-dimensional hand trajectories from epidural electrocorticographic signals in Japanese macaques." J. Neural Eng. 2012 9:036015.


    Task

    The monkey was tracking food rewards with the hand contralateral to the implant side. Each monkey was trained to reach for food offered by the experimenter at irregular intervals.



    Data

    ECoG and motion data were sampled at 1 kHz and 120 Hz, respectively, with time stamps synchronized. Monkey B and Monkey C had 64-channel ECoG electrodes.


    Data are originally from http://neurotycho.org/epidural-ecog-food-tracking-task.
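
    Because the time stamps are synchronized, the 120 Hz motion trace can be put on the 1 kHz ECoG clock by interpolation, as in this sketch (synthetic arrays stand in for the downloaded data):

        import numpy as np

        FS_ECOG, FS_MOTION = 1000, 120  # sampling rates stated above
        dur = 10.0                      # seconds, for illustration
        t_ecog = np.arange(0, dur, 1 / FS_ECOG)
        t_motion = np.arange(0, dur, 1 / FS_MOTION)
        motion_x = np.sin(2 * np.pi * 0.5 * t_motion)  # stand-in hand x-coordinate

        # Linear interpolation aligns the slower stream to the ECoG time base
        motion_on_ecog_clock = np.interp(t_ecog, t_motion, motion_x)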

  • Human Dream Decoding from Kamitani Lab, ATR last updated 6 years ago
    fMRI

    Overview

    This is fMRI data from Horikawa, et al. (2013): Neural Decoding of Visual Imagery During Sleep. Science. May;340(6132):639-642. In this study, three human subjects slept in an fMRI scanner, giving verbal reports on their dreams when awakened. Contents occurring in dreams were then predicted using BOLD signal decoding analysis.


    Task

    Three human subjects slept in an fMRI scanner. When a specific EEG pattern associated with dreaming was observed, subjects were awakened and gave a verbal report describing the contents of their dreams. Dream contents were then matched to synsets in WordNet, and brain activity was labeled with the synsets. The data were divided into training and test sets, and decoding analysis was performed to predict the dream contents (synsets) associated with brain activity.


    Data

    Voxels from the whole brain, with masks for V1, V2, V3, lateral occipital complex (LOC), fusiform face area (FFA), parahippocampal place area (PPA), lower visual cortex (LVC), and higher visual cortex (HVC) regions of interest, are shared in this dataset. All data are pre-processed and ready to use for machine learning.
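
    A sketch of selecting one ROI from the whole-brain voxel matrix with a boolean mask (the array layout is an assumption; synthetic data stand in for the shared files):

        import numpy as np

        rng = np.random.default_rng(0)
        voxels = rng.standard_normal((100, 5000))  # samples x whole-brain voxels (synthetic)
        v1_mask = rng.random(5000) < 0.05          # boolean V1 mask (synthetic)

        v1_data = voxels[:, v1_mask]  # samples x V1 voxels, ready for a decoder
        print(v1_data.shape)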

  • Decoding Subjective Contents from the Brain from Kamitani Lab, ATR last updated 6 years ago
    fMRI Other

    Overview

    This is fMRI data from Kamitani Y and Tong F (2005): Decoding the visual and subjective contents of the human brain. Nature Neuroscience 8, pp 679-685. This paper explored the ability to decode the perception of edge orientation in four human subjects using an orientation decoder.


    The figure in the original paper shows (a) the orientation decoder, which takes fMRI voxel data as input and outputs a predicted orientation for a stimulus, and (b) the orientation selectivity of individual voxels and detectors.


    Task

    Four human subjects viewed contrast-based oriented image gratings with 8 possible grating orientations, for 20-24 trials per grating orientation.


    Data

    Voxels from V1, V2, V3, V4, and VP cortex are shared in this dataset.
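
    A sketch of a linear orientation decoder in the spirit of the paper: a classifier mapping voxel patterns to one of the 8 grating orientations (synthetic data stand in for the shared voxels; the paper's own decoder combines voxel responses with learned weights, not this exact model):

        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        X = rng.standard_normal((160, 400))  # 160 trials x 400 voxels (synthetic)
        y = np.tile(np.arange(8), 20)        # 8 orientations, 20 trials each

        clf = LinearSVC(max_iter=5000)       # linear decoder over voxel patterns
        clf.fit(X[:80], y[:80])
        print("held-out accuracy:", clf.score(X[80:], y[80:]))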

  • dsf from fdsfs last updated 7 years ago
    Other

    sdfs

  • Hand shape decoding (Rock, Papers, Scissors) from Kamitani Lab, ATR last updated 7 years ago
    fMRI Other
    Overview

    This is fMRI data for the sample programs of BDTB (Brain Decoder Toolbox).


    Task

    A human subject changed the shape of his right hand (rock, paper, and scissors shapes) in an fMRI scanner.


    Data

    Voxels from the whole brain, with masks for the cerebellum (CB), primary motor cortex (M1), and supplementary motor area (SMA), are shared in this dataset.

  • ddd from d last updated 7 years ago
    Other

    ed

  • Neurotycho: Food Tracking Task (Epidural) from Fujii Lab, RIKEN last updated 7 years ago
    ECoG Other

    Overview

    This is ECoG data from Chao ZC, Nagasaka Y, Fujii N (2010): "Long-term asynchronous decoding of arm motion using electrocorticographic signals in monkeys." Frontiers in Neuroengineering 3:3.


    Task

    The monkey was tracking food rewards with the hand contralateral to the implant side. Each monkey was trained to reach for food offered by the experimenter at irregular intervals.



    Data

    ECoG and motion data were sampled at 1 kHz and 120 Hz, respectively, with time stamps synchronized. Monkey A had 32-channel electrodes implanted, whereas Monkey K had 64-channel ECoG electrodes.


    Data are originally from http://neurotycho.org/food-tracking-task.

  • Neurotycho: Anesthesia and Sleep Task from Fujii Lab, RIKEN last updated 7 years ago
    ECoG Other

    Overview

    This is ECoG data from Yanagawa, T., Chao, Z. C., Hasegawa, N., & Fujii, N. (2013). "Large-Scale Information Flow in Conscious and Unconscious States: an ECoG Study in Monkeys." PloS one, 8(11), e80845.


    Task

    One monkey sat calmly with its head and arms restrained. ECoG data were recorded during awake (eyes-open, eyes-closed), anesthetized, and sleeping conditions.


    Data

    128-channel ECoG data were sampled at 1 kHz.

    Data are originally from http://neurotycho.org/anesthesia-and-sleep-task.

  • Neurotycho: Emotional Movie Task from Fujii Lab, RIKEN last updated 7 years ago
    ECoG Other

    Task

    The monkey was presented with six different movie clips while sitting with its head fixed. No food reward was given. While the monkey was performing the task, ECoG data and eye-tracking data were recorded simultaneously. There was no motion tracking.
    ECoG and eye-tracking data were sampled at 1 kHz and 120 Hz, respectively, with starting and stopping points synchronized.


    Data

    ECoG signals (μV) recorded from electrodes 1-128, the stimulus (movie) ID, and eye-tracking data are shared in this dataset.



    Data are originally from http://neurotycho.org/emotional-movie-task.

  • Neurotycho: Social Competition Task from Fujii Lab, RIKEN last updated 7 years ago
    ECoG Other

    Overview

    ECoG data were obtained from one monkey under different hierarchical conditions by pairing it with multiple monkeys.

    See http://neurotycho.org/social-competition-task


    Task

    Two monkeys sat around a table. ECoG data and eye-tracking data were recorded from one monkey, and motion data were captured from both monkeys and the experimenter. For each trial, one food reward was placed on the table. If the recorded monkey was dominant over the other, he could take the reward without hesitation; if he was submissive, he could not take the reward because of social suppression. ECoG data were obtained under different hierarchical conditions by pairing the recorded monkey with multiple partners. ECoG data were sampled at 1 kHz; motion data and eye-tracking data were sampled at 120 Hz. The start and stop points of all data were synchronized.


    Data

    ECoG data and eye-tracking data from one monkey, and motion data from the two monkeys and the experimenter, are shared in this dataset.

    Data are originally from http://neurotycho.org/social-competition-task.

  • Neurotycho: Monkey Sleep Task from Fujii Lab, RIKEN last updated 7 years ago
    ECoG Other

    Overview

    These are ECoG data from a monkey that sat still with its head and arms restrained and was then anesthetized so as to lose consciousness.


    Task

    One monkey sat calmly with its head and arms restrained. ECoG data were recorded first in an alert condition; then anesthesia was injected and the anesthetized condition began.


    Data

    128-channel ECoG data were sampled at 1 kHz.

    Data are originally from http://neurotycho.org/sleep-task.

  • Neurotycho: Visual Grating Task from Fujii Lab, RIKEN last updated 8 years ago
    ECoG Other

    Overview

    These are ECoG data from a visual grating viewing task, in which a monkey viewed a screen that alternated every 2 seconds between a blank screen and a visual grating moving in one of 8 different directions.


    Task

    A monkey viewed a screen that alternated every 2 seconds between a blank screen and a visual grating whose direction was selected randomly from 8 possibilities (45, 90, 135, 180, 225, 270, 315, 360 degrees).


    The monkey sat with its head fixed and its arm motion restrained while ECoG data and eye position were recorded. A monitor was placed in front of the monkey, and a grating pattern moving in one of eight directions was presented on the screen; no fixation was required. The blank and stimulus patterns alternated every 2 s. Stimulus events can be decoded from the analog value of the 129th channel, as sketched below. ECoG data were sampled at 1 kHz. One cycle of the sinusoidal pattern was 27 mm, moving at 108 mm/s (4 Hz). The distance between the monkey and the screen was 490 mm.
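
    Since stimulus events are encoded as an analog value on channel 129, onsets can be recovered by threshold crossing, as in this sketch (the threshold and array layout are assumptions; inspect the channel to choose a real value):

        import numpy as np

        FS = 1000  # ECoG sampling rate in Hz

        def stimulus_onsets(event_channel, threshold=0.5):
            """Sample indices where the analog event channel crosses the threshold upward."""
            above = event_channel > threshold
            return np.flatnonzero(above[1:] & ~above[:-1]) + 1

        # Usage (hypothetical): onsets_s = stimulus_onsets(data[128]) / FS  # channel 129 is index 128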


    Data

    128 ECoG channels were recorded at 1000 Hz, and the positions of the electrodes are shown in K2.png.


    Data are originally from http://neurotycho.org/visual-grating-task.