# Octopus Eye Tracking
The Octopus Eye Tracking package can be used to autonomously estimate what an octopus can see based on video footage. The package enables users to create neural networks with DeepLabCut that estimate the locations of an octopus's eyes. Users must annotate some video frames, then can use the provided job scripts to train networks on the Discovery cluster and analyze full videos (see Training on Discovery below). The annotated video frames can then be analyzed further using the Python modules data_loader, analyze, and visualize: estimates of where the octopus is looking are generated, and a numeric estimate of how directly the octopus is tracking an object is predicted.

More comprehensive documentation is needed for all of the functions; however, the notebook example_usage.ipynb shows how users can flexibly use the package to visualize data output by a DLC network.

## Examples

In an example output frame from the program, the eyes and stimulus are annotated, along with a line from the closer eye to the stimulus and a line from the center of that eye's visual field outwards. The angle between these two lines, shown as theta at the top of the frame, is the angle difference (in radians) between them; the smaller the angle, the more directly the octopus is viewing the stimulus.

Example data and the data used in example_usage.ipynb can be found here. eye_tracking_vid.mp4 is the finalized output from the Python modules and was created from label_data.csv and original_video.mp4.
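The angle itself reduces to a comparison of two direction vectors from the closer eye. The sketch below is only a minimal illustration of that calculation, not the package's own implementation; the function name, and the idea that the center of the visual field is supplied as a separate (x, y) point, are assumptions.

```python
import numpy as np

def viewing_angle(eye_xy, stimulus_xy, gaze_xy):
    """Angle (radians) between the eye-to-stimulus line and the line from the
    eye toward the center of its visual field. Hypothetical helper: all inputs
    are (x, y) pixel coordinates, and how the gaze point is derived is assumed.
    """
    to_stimulus = np.asarray(stimulus_xy, float) - np.asarray(eye_xy, float)
    to_gaze = np.asarray(gaze_xy, float) - np.asarray(eye_xy, float)
    cos_theta = np.dot(to_stimulus, to_gaze) / (
        np.linalg.norm(to_stimulus) * np.linalg.norm(to_gaze)
    )
    # Clip to guard against floating-point values just outside [-1, 1].
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# A stimulus roughly 30 degrees off the center of the visual field:
theta = viewing_angle(eye_xy=(120, 80), stimulus_xy=(220, 80), gaze_xy=(207, 130))
print(theta)  # ~0.52 rad; smaller values mean more direct viewing
```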
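DeepLabCut saves its predictions as a CSV with a three-row header (scorer / bodyparts / coords) and x, y, and likelihood columns for each body part. Assuming label_data.csv keeps that standard layout, and assuming body-part names such as `left_eye` and `stimulus` (both assumptions; the project's actual label names may differ), the coordinates and confidences can be pulled out with pandas:

```python
import pandas as pd

# Hypothetical loading sketch: assumes label_data.csv uses DeepLabCut's standard
# three-row header (scorer, bodyparts, coords) and these body-part names.
labels = pd.read_csv("label_data.csv", header=[0, 1, 2], index_col=0)
labels.columns = labels.columns.droplevel(0)  # drop the scorer level

left_eye_xy = labels["left_eye"][["x", "y"]].to_numpy()      # (n_frames, 2)
stimulus_xy = labels["stimulus"][["x", "y"]].to_numpy()
left_eye_conf = labels["left_eye"]["likelihood"].to_numpy()  # DLC confidence per frame

# Low-confidence frames (e.g., an occluded eye) can be masked before analysis.
reliable = left_eye_conf > 0.9
print(f"{reliable.sum()} of {len(reliable)} frames above the confidence threshold")
```

Keeping the likelihood column around is also the natural hook for folding DLC's prediction confidence into the analysis, as mentioned under Future Work below.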
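eye_tracking_vid.mp4 itself is produced by the package's visualize step; the snippet below is not that code, just a rough sketch of the same kind of overlay (points for the eye and stimulus plus the connecting line) using OpenCV and the arrays from the loading sketch above. File names and colors are placeholders.

```python
import cv2

# Illustrative overlay only -- not the package's visualize module.
cap = cv2.VideoCapture("original_video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("overlay_video.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

frame_idx = 0
while frame_idx < len(left_eye_xy):
    ok, frame = cap.read()
    if not ok:
        break
    eye = (int(round(left_eye_xy[frame_idx, 0])), int(round(left_eye_xy[frame_idx, 1])))
    stim = (int(round(stimulus_xy[frame_idx, 0])), int(round(stimulus_xy[frame_idx, 1])))
    cv2.circle(frame, eye, 5, (0, 255, 0), -1)      # closer eye
    cv2.circle(frame, stim, 5, (0, 0, 255), -1)     # stimulus
    cv2.line(frame, eye, stim, (255, 255, 255), 2)  # eye-to-stimulus line
    out.write(frame)
    frame_idx += 1

cap.release()
out.release()
```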
## Training on Discovery

The following guide specifically uses a collection of infrared data. The examples can be run just as the guide describes, but users will need to modify the scripts to analyze new data.

Users must first use DeepLabCut to create a project, locally annotate video frames using the provided GUI, and then copy the project to Discovery using scp for training. See the DeepLabCut documentation for project creation and data annotation.

Once a DeepLabCut project is copied to Discovery, a network can be trained to autonomously annotate frames using the job scripts below (located in /jobs). These job scripts submit jobs to Discovery and run the scripts located in /scripts. The same scripts can be run directly with bash commands, but those commands bypass the job scheduling system and are not advised outside of basic testing of the scripts.

Using the proper PBS job submission method:

* Training Test: `mksub jobs/ir/test_train.pbs`
* Network Evaluation: `mksub jobs/ir/eval.pbs`
* Analyzing and annotating videos: `mksub jobs/ir/vids.pbs`

Using the nohup & method:

* Training Test (small number of iterations): `nohup singularity exec --nv deeplabcut/sandbox_dlc/ python3 scripts/ir/test_train.py &`

## Future Work

I intend to refine the network with more data in the spring, particularly to identify when the eyes are occluded. With a robust model trained, I can begin to collect data using two cameras for 3D eye tracking. I also have work to do on the repo: better documentation, a package structure for the Python modules, additional visualization options (including a visual field representation, for which I need to do more literature review), more numerical analyses, and incorporating the confidence DLC provides with its predictions into my analysis.

Another thing to work on is the generation of coordinates for a stimulus. While generating these data isn't a hassle for static stimuli, it is a pain to do manually for moving stimuli. I could create a second network for tracking crabs, or, in the case of a moving stimulus on a screen, time-sync the recording and the stimulus presentation properly so that I can always predict where the stimulus is.
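For the moving-stimulus case, one way to get stimulus coordinates without manual labeling is to log the on-screen position of the stimulus with timestamps during presentation and interpolate it at each video frame time. The sketch below only illustrates that idea; the log format, frame rate, function name, and the assumption that the video starts at t = 0 are all hypothetical.

```python
import numpy as np

def stimulus_per_frame(stim_times, stim_xy, n_frames, fps):
    """Interpolate logged on-screen stimulus positions at each video frame time.

    Hypothetical helper: `stim_times` are presentation timestamps (seconds),
    `stim_xy` the matching (x, y) screen positions, and the video is assumed
    to start at t = 0 and run at a constant `fps`.
    """
    frame_times = np.arange(n_frames) / fps
    xs = np.interp(frame_times, stim_times, stim_xy[:, 0])
    ys = np.interp(frame_times, stim_times, stim_xy[:, 1])
    return np.column_stack([xs, ys])  # (n_frames, 2), usable like stimulus_xy above

# Example: a stimulus position logged once per second, video recorded at 30 fps for 5 s.
times = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
positions = np.array([[100, 240], [160, 240], [220, 240], [280, 240], [340, 240], [400, 240]])
coords = stimulus_per_frame(times, positions, n_frames=150, fps=30)
print(coords.shape)  # (150, 2)
```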