BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//OpenACalendar//NONSGML OpenACalendar//EN
X-WR-CALNAME:Edinburgh Data Visualization Meetup 12 - Open Tech Calendar
BEGIN:VEVENT
UID:9839@otc.opentechcalendar.co.uk
URL:https://opentechcalendar.co.uk/event/9839-edinburgh-data-visualization-meetup-12
SUMMARY:Edinburgh Data Visualization Meetup 12
DESCRIPTION:I See What You Mean - two ways vision helps us understand speech\n\nGordon McLeod & Ben Hopson of Cirrus Logic (www.cirrus.com)\n\nSpeaker identification technology is increasingly common on mobile devices. The virtual assistant becomes more compelling when it knows who you are and\, in some use cases\, voice ID is a more convenient way of unlocking than by face or fingerprint.\nIncoming audio is first processed into features which are stable\, distinctive and hard to imitate - ideally capturing unique aspects of the speaker's vocal tract. Several different visualisations are required to select features and confirm that feature extraction works consistently over a large number of speakers and environmental conditions.\nFeatures are then fed into a classifier which compares the extracted features to enrolled users to determine if the audio is recognised. Training\, tuning\, testing and debugging this system brings with it many of the challenges of machine learning. There is a need to visualise data in many dimensions - and it can also be difficult to determine why the system has made a particular decision. Such systems need to be tested in large trials where results are seldom clear cut\, so visualisation is critical during debugging to understand overall trends\, while distinguishing individual behaviours.\n\nBecky Mead & Georgia Clarke of Speech Graphics - Scottish Tech Startup of the Year (www.speech-graphics.com)\n\nSpeech audio contains rich phonetic information which can be visualized in a variety of ways. People who communicate with spoken language are acutely sensitive to the relationship between acoustic phonetics and the way speech articulators move when creating speech sounds\, which is why bad lip-syncing is so jarring. Speech Graphics uses the information contained in a speech audio signal to simulate the visual (facial) motion that generated that sound. Our technology is used to generate accurate facial movement and expressions for character dialogue in a growing number of AAA video games\, among other applications. Becky and Georgia will be showing us:\n\n- An introduction to spectrograms (a way of visualizing audio data) and the ways that linguists have used them to analyze speech phonetically\n- How Speech Graphics uses that same phonetic information to simulate facial movement corresponding to a speech audio signal\n- Demos!\n\nThanks to Cirrus Logic for sponsoring our food and refreshments.\n\nCirrus Logic's engineers and data scientists design intelligent audio chips that power the smartphone in your pocket\, consumer and car audio systems\, and smart homes. As a major presence in the buzzing local tech ecosystem\, Cirrus Logic is proud to sponsor the Edinburgh Data Visualisation Meetup.\n\n---\n\nAs usual\, there's time and space if you would like to share anything.\nWe're always open to suggestions for topics and speakers\, so let us know if you have someone or something in mind.\nSee you at the meetup\, and do bring along friends & colleagues.\nCheers\,\nBrendan (Hill)\, Ben (Bach)\, Uta (Hinrichs)\n\nVENUE: NOTE - this will be our first meeting at our new venue\, Cirrus Logic's office in Quartermile\, which we plan to alternate with InSpace at the University of Edinburgh School of Informatics from now on. See below for directions.\nhttps://opentechcalendar.co.uk/event/9839-edinburgh-data-visualization-meetup-12\nPowered by Open Tech Calendar
DTSTART:20200130T180000Z
DTEND:20200130T201500Z
LAST-MODIFIED:20200129T112559Z
SEQUENCE:110650076
DTSTAMP:20191208T121843Z
END:VEVENT
END:VCALENDAR