Head-fixed behavior
Mai-Anh Vu, Mark Howe
Abstract
We have developed a new micro-fiber array approach capable of chronically measuring and optogenetically manipulating local dynamics across more than 100 targeted locations simultaneously in head-fixed and freely moving mice, enabling investigation of cell-type- and neurotransmitter-specific signals over arbitrary 3-D volumes. This protocol includes the steps for the setup of, and acquisition of behavioral data during, reward and salient stimulus experiments in head-fixed mice. Please contact us (mwhowe@bu.edu) if you are interested in using this technique.
Steps
Head-fixed setup
Mice were head-fixed over a hollow styrofoam ball treadmill (Smoothfoam, 8 in diameter), supported by air in a 3D-printed plastic cradle, which allowed them to run in all directions (Dombeck et al., 2010). The setup is detailed in the substeps below:
To monitor the movement of the ball, two optical mouse sensors (Logitech G203 mice with hard plastic shells removed) were mounted on posts, one at the back of the ball and one 90 degrees off to one side, with the sensors level with the ball’s “equator”. Pitch and yaw (y- and x-displacement, respectively) were read from the optical mouse sensor at the back, and roll (y-displacement) was read from the optical mouse sensor at the side. The optical mouse sensitivity was set to 400 dpi, with a polling rate of 1 kHz.
Each optical mouse was connected to a Raspberry Pi (3B+) running a multi-threaded Python program that continuously read in the 1 kHz stream of x- and y-displacements (in dots) and output a proportional voltage at 100 Hz. The dots-to-voltage conversion was set such that a velocity magnitude of 3.5 m/s corresponded to the maximum output voltage of 3.3 V.
This velocity-magnitude signal was output as an analog voltage via a digital-to-analog converter (DAC, MCP4725) and read in through an analog input pin on the NIDAQ board. The sign (direction) of the velocity was sent as a binary digital variable through a separate Raspberry Pi output pin and read in through a separate digital input pin on the NIDAQ board (see the sketch below).
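A minimal Python sketch of such a readout loop is given below, as an assumption-laden illustration rather than the authors' actual program: it assumes the evdev package for raw mouse events, the Adafruit CircuitPython MCP4725 driver for the DAC, and RPi.GPIO for the sign pin. The device path, GPIO pin number, and single-axis handling are all assumptions; the actual program read both x and y at 1 kHz.

import threading
import time

import board
import busio
import adafruit_mcp4725
import RPi.GPIO as GPIO
from evdev import InputDevice, ecodes

MOUSE_DEV = "/dev/input/event0"  # assumed device path for one optical mouse
SIGN_PIN = 17                    # assumed BCM pin carrying the velocity sign
VEL_MAX = 3.5                    # velocity magnitude (m/s) mapped to 3.3 V
DOTS_PER_METER = 400 / 0.0254    # 400 dpi sensor: dots per meter of travel
OUT_RATE = 100.0                 # output update rate (Hz)

dy_accum = 0                     # displacement (dots) since the last output tick
lock = threading.Lock()

def reader():
    # Reader thread: accumulate raw y-displacement events from the mouse.
    global dy_accum
    dev = InputDevice(MOUSE_DEV)
    for event in dev.read_loop():
        if event.type == ecodes.EV_REL and event.code == ecodes.REL_Y:
            with lock:
                dy_accum += event.value

GPIO.setmode(GPIO.BCM)
GPIO.setup(SIGN_PIN, GPIO.OUT)
dac = adafruit_mcp4725.MCP4725(busio.I2C(board.SCL, board.SDA))

threading.Thread(target=reader, daemon=True).start()

while True:
    time.sleep(1.0 / OUT_RATE)
    with lock:
        dy, dy_accum = dy_accum, 0
    velocity = (dy * OUT_RATE) / DOTS_PER_METER      # signed velocity, m/s
    GPIO.output(SIGN_PIN, GPIO.HIGH if velocity >= 0 else GPIO.LOW)
    # Scale |velocity| so that VEL_MAX maps to the DAC's full-scale 3.3 V.
    dac.normalized_value = min(abs(velocity) / VEL_MAX, 1.0)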
Water reward
For reward experiments, mice were water-scheduled: they received 0.8-1.5 mL of water daily, calibrated so that they maintained a body weight of 85-90% of their free-water body weight, as described previously (Howe and Dombeck, 2016).
For unpredicted reward delivery, water rewards (9 μL) were delivered at random time intervals (drawn from a uniform distribution of 5-30 s) through a spout mounted on a post, gated by an electronically controlled solenoid valve.
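The reward timing can be sketched in a few lines of Python; deliver_reward() below is a hypothetical placeholder for the solenoid pulse (calibrated to 9 μL), since the actual gating ran through the NIDAQ outputs.

import random
import time

def deliver_reward():
    # Hypothetical stand-in for the electronically gated solenoid pulse.
    pass

while True:
    time.sleep(random.uniform(5.0, 30.0))  # interval drawn from uniform 5-30 s
    deliver_reward()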
Licking was monitored by a capacitive touch circuit connected to the spout.
The noise generated by the air supply under the ball treadmill was measured to be approximately 78-80 dB.
Salient stimuli
Salient stimuli were presented in randomized order and at varying intensities, with 14-21 presentations of each modality and intertrial intervals drawn randomly from a uniform distribution of 4-40 seconds (a trial-order sketch follows the two steps below).
Light stimuli were presented via an LED (Thorlabs, M470L3) mounted on a post level with the mouse, approximately 20 cm away from the ball and 45 degrees contralateral to the implanted side, calibrated to deliver light at varying intensities from 1-27 mW, as measured just in front of the LED.
Sound stimuli were presented via a USB speaker placed on the table approximately 20 cm in front of the ball, and calibrated to deliver tones at varying intensities from 80-90 dB, as measured from the location of the mouse.
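A minimal Python sketch of the trial-order generation is given below; the discrete intensity levels are illustrative assumptions, since the protocol specifies only the ranges (1-27 mW light, 80-90 dB sound).

import random

# Illustrative intensity levels within the protocol's stated ranges.
intensities = {"light_mW": [1, 9, 27], "sound_dB": [80, 85, 90]}
n_reps = 7  # 7 reps x 3 levels = 21 presentations per modality

trials = [(modality, level)
          for modality, levels in intensities.items()
          for level in levels
          for _ in range(n_reps)]
random.shuffle(trials)  # randomized presentation order

for modality, level in trials:
    iti = random.uniform(4.0, 40.0)  # intertrial interval, uniform 4-40 s
    # ...wait for iti, then trigger the stimulus at this level via the NIDAQ outputs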
Data acquisition and synchronization
Input signals (licking, velocity, TTLs, etc.) were acquired and output signals (triggering stimulus delivery, reward delivery, the LED, image acquisition, etc.) were generated at 2 kHz by a custom MATLAB program via the NIDAQ card.
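As an illustration only (the authors used a custom MATLAB program), an analogous continuous 2 kHz acquisition can be sketched with the Python nidaqmx package; the device and channel names are assumptions.

import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    # Assumed channels: lick sensor, velocity magnitude, imaging-frame TTL.
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:2")
    task.timing.cfg_samp_clk_timing(2000, sample_mode=AcquisitionType.CONTINUOUS)
    task.start()
    for _ in range(10):
        # Read 0.5 s blocks: 1000 samples per channel at 2 kHz.
        block = task.read(number_of_samples_per_channel=1000)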
For synchronization of behavioral data with imaging data, TTLs were sent from the cameras to the NIDAQ card 500μs after the beginning of readout for each frame (Hamamatsu HCLive VSYNC).
Behavioral data were then downsampled to match the sampling rate of the neural data by block averaging, i.e., averaging the behavioral samples within each imaging frame (binary variables were downsampled in two additional ways: rounding and summing; see the sketch below).
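A minimal numpy sketch of this downsampling follows; samples_per_frame is assumed to be an integer number of 2 kHz samples per imaging frame.

import numpy as np

def downsample(trace, samples_per_frame, kind="mean"):
    # Collapse a 2 kHz behavioral trace to one value per imaging frame.
    n = len(trace) // samples_per_frame * samples_per_frame
    blocks = np.asarray(trace[:n]).reshape(-1, samples_per_frame)
    if kind == "mean":    # continuous variables: block average
        return blocks.mean(axis=1)
    if kind == "round":   # binary variables: rounded block average
        return np.round(blocks.mean(axis=1))
    if kind == "sum":     # binary variables: event count per frame
        return blocks.sum(axis=1)
    raise ValueError(f"unknown kind: {kind}")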
To record the mouse's facial movements, video was acquired with a mounted USB3 camera (FLIR Blackfly S USB3 BFS-U3-16S2M-CS) positioned to capture a side view of the mouse’s face. The behavior cameras were triggered on the TTL output from the imaging cameras, as described above.