{"id":140252,"date":"2018-08-13T08:29:54","date_gmt":"2018-08-13T12:29:54","guid":{"rendered":"https:\/\/today.uconn.edu\/?p=140252"},"modified":"2018-08-13T10:04:09","modified_gmt":"2018-08-13T14:04:09","slug":"synchrony-keeps-beat","status":"publish","type":"post","link":"https:\/\/today.uconn.edu\/2018\/08\/synchrony-keeps-beat\/","title":{"rendered":"Synchrony Keeps the Beat"},"content":{"rendered":"<p>UConn neuroscientist Ed Large built a model of the brain that can predict the future. And then he taught it to dance.<\/p>\n<p>Synchrony, a musical light show driven by artificial intelligence, will go on sale to the general public this fall. But Large and his colleagues at UConn and Oscilloscape, the company he founded to develop Synchrony, believe that rather than being a toy, the device and the science behind it shed real light on how we hear and understand music and language.<\/p>\n<p>\u201cWe wanted to do artificial intelligence for sound. Synchrony is designed to act like your brain. It hears the music like you do,\u201d says Large, a professor in the psychological sciences department.<\/p>\n<p>Stevie Wonder\u2019s \u201cFaith\u201d is playing in the background. 
Large flicks a switch, and Synchrony lights up.<\/p>\n<figure id=\"attachment_140322\" aria-describedby=\"caption-attachment-140322\" style=\"width: 550px\" class=\"wp-caption alignleft\"><a href=\"https:\/\/today.uconn.edu\/wp-content\/uploads\/2018\/08\/Synchrony1_formatted.jpg\"><img decoding=\"async\" class=\"wp-image-140322 img-responsive lazyload\" data-src=\"https:\/\/today.uconn.edu\/wp-content\/uploads\/2018\/08\/Synchrony1_formatted-1024x683.jpg\" alt=\"Synchrony\u2019s lights flash and shift in patterns that look like they\u2019re dancing, and the colors somehow make sense with the music.\" width=\"550\" height=\"367\" data-srcset=\"https:\/\/today.uconn.edu\/wp-content\/uploads\/2018\/08\/Synchrony1_formatted-1024x683.jpg 1024w, https:\/\/today.uconn.edu\/wp-content\/uploads\/2018\/08\/Synchrony1_formatted-300x200.jpg 300w, https:\/\/today.uconn.edu\/wp-content\/uploads\/2018\/08\/Synchrony1_formatted-768x512.jpg 768w, https:\/\/today.uconn.edu\/wp-content\/uploads\/2018\/08\/Synchrony1_formatted-630x420.jpg 630w, https:\/\/today.uconn.edu\/wp-content\/uploads\/2018\/08\/Synchrony1_formatted-150x100.jpg 150w, https:\/\/today.uconn.edu\/wp-content\/uploads\/2018\/08\/Synchrony1_formatted.jpg 1783w\" data-sizes=\"(max-width: 550px) 100vw, 550px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 550px; --smush-placeholder-aspect-ratio: 550\/367;\" \/><\/a><figcaption id=\"caption-attachment-140322\" class=\"wp-caption-text\">Synchrony\u2019s lights flash and shift in patterns that look like they\u2019re dancing, and the colors somehow make sense with the music.<\/figcaption><\/figure>\n<p>It\u2019s impressive. Synchrony\u2019s lights flash and shift in patterns that look like they\u2019re dancing, and the colors somehow make sense with the music. 
When the music switches to Rihanna\u2019s \u201cPon De Replay,\u201d Synchrony\u2019s style switches, too. This thing is to your typical flashing strip of sound-sensitive LEDs what Prince was to your average guitarist. It\u2019s like a DMX show done by a professional, but on a smaller scale. The colors shift and change, and it picks up sub-rhythms in the beat like a good, intuitive dancer. When Large says Synchrony hears like a person, it\u2019s easy to believe. It certainly looks like it.<\/p>\n<p>But even if it hears the music like a human, Synchrony is still made of silicon. The brain of the device lives in a small, flat black box. Inside is a bundle of computer chips with a special arrangement called a neural network. As the name implies, neural networks are supposed to mimic networks of brain cells (neurons). Most neural networks use connections between neurons to look for patterns in data. So a neural network built to analyze sound can notice a repeating beat or musical line in a song, and learn to anticipate it.<\/p>\n<p>The prevailing theory of how we hear says the brain does essentially the same thing. When sound enters your ear, it gets picked up by the auditory nerve and sent to the brain stem and then to the thalamus, which collects sensory input of all kinds. The thalamus relays the sound signal to the auditory cortex in the brain above your ear, which then sends messages about the sound throughout the brain. A significant number of these messages travel directly to the motor system.<\/p>\n<p>The motor system\u2019s primary job is to move your body. But many scientists now believe that it also helps us find the beat and metric structure in a piece of music. That\u2019s why most people \u201cfeel\u201d the beat. And it\u2019s also how we can dance, according to this model.<\/p>\n<p>But not everybody can dance or march in time to a beat. 
There\u2019s always that one freshman at band camp \u2026.<\/p>\n<p>\u201cYes, there does seem to be a subset of the population without rhythm. But we don\u2019t know why,\u201d Large says. Neuroscientists guess that in some people, the coupling between the auditory cortex and the motor-processing parts of the brain isn\u2019t strong enough to translate into a feeling of movement or beat.<\/p>\n<p>Synchrony has no such difficulty. What it lacks in legs, it makes up for in colored lights. And improvisation. Because yes, Synchrony looks like it improvises. Sections of \u201cPon De Replay\u201d are very repetitive, yet Synchrony didn\u2019t always do exactly the same thing when Rihanna repeated a line. Like a human dancer, it changed things up a little here and there.<\/p>\n<blockquote>\n  <p>We wanted to do artificial intelligence for sound. Synchrony is designed to act like your brain. It hears the music like you do. <cite> &#8212; Ed Large<\/cite><\/p>\n<\/blockquote>\n<p>That spontaneity is by design. Just like a human brain, Synchrony\u2019s neural net oscillates even in the absence of music. When there is music, it synchronizes with the beat, but it may find the beat of the bass drum, or the beat of the vocals. Synchronizing is almost like predicting the future, which is something humans do all the time; based on past patterns, we make a prediction about what will happen next and then act accordingly. And Synchrony\u2019s spontaneous oscillations are somewhat similar to human improvisation, or original thoughts; even though we think we know what will happen next, we don\u2019t always respond the same way. Not when we\u2019re chatting with a friend, not when we\u2019re trying to avoid someone coming the other way on the sidewalk, and definitely not when we\u2019re dancing.<\/p>\n<p>Dance, and the music that inspires dance, are prime examples of this random factor. 
Modern music recording has taken away a large part of this spontaneity, in many ways lessening the appeal of the music in the process, and Large believes that regaining that improvisational, human touch could be another application of Synchrony.<\/p>\n<p>For example, in a group of musicians performing live, the drummer is usually the one who keeps the beat. If the drummer wants to, she can speed up or slow down ever so slightly, and all the other musicians follow that lead.<\/p>\n<p>But in recording studios these days, a robotic metronome keeps the beat instead of a human drummer. This makes it easier for the producer to record different parts of a song separately and weave them all together, but music recorded to a robotically predictable beat in this way lacks the subtleties and emotional color that an ever-so-slightly changeable beat can give.<\/p>\n<p>Large says that Synchrony\u2019s technology could change that. With Synchrony\u2019s ability to follow a beat like a human, a real live drummer could lay down a line to a song and change the beat as she feels it. The other musicians would listen to it and play or sing their parts on top. And then Synchrony could help the producer mesh it all together with the same precision that the robotic metronome allows \u2013 but with more emotional texture.<\/p>\n<p>So this season of lights, we can enjoy Synchrony as a decoration. But Large and his colleagues hope that eventually, Synchrony doesn\u2019t just make your music look better. They hope it makes it sound better, too.<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>UConn neuroscientist Ed Large built a model of the brain that can predict the future. 
And then he taught it to dance.<\/p>\n","protected":false},"author":79,"featured_media":140322,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"video","meta":{"_acf_changed":false,"_crdt_document":"","wds_primary_category":0,"wds_primary_series":0,"wds_primary_attribution":0,"footnotes":""},"categories":[1711,2226,2076,2225,70],"tags":[],"magazine-issues":[],"coauthors":[1899],"class_list":["post-140252","post","type-post","status-publish","format-video","has-post-thumbnail","hentry","category-arts-culture","category-clas","category-research","category-uconn-storrs","category-video","post_format-post-format-video"],"pp_statuses_selecting_workflow":false,"pp_workflow_action":"current","pp_status_selection":"publish","acf":[],"publishpress_future_action":{"enabled":false,"date":"2026-04-20 05:17:19","action":"change-status","newStatus":"draft","terms":[],"taxonomy":"category","extraData":[]},"publishpress_future_workflow_manual_trigger":{"enabledWorkflows":[]},"_links":{"self":[{"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/posts\/140252","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/users\/79"}],"replies":[{"embeddable":true,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/comments?post=140252"}],"version-history":[{"count":10,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/posts\/140252\/revisions"}],"predecessor-version":[{"id":140481,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/posts\/140252\/revisions\/140481"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/media\/140322"}],"wp:attachment":[{"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/media?parent=140252"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/today.uconn
.edu\/wp-rest\/wp\/v2\/categories?post=140252"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/tags?post=140252"},{"taxonomy":"magazine-issue","embeddable":true,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/magazine-issues?post=140252"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/coauthors?post=140252"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}