
Content provided by Emily Binder. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Emily Binder or her podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://es.player.fm/legal.

056 - Kate O’Neill: Why Technology Must Be Human Centric

25:06

Episode 245550222, series 2534774

How do we design technology that is both smart for business and good for people? This conversation examines our approach to voice and AI, emerging voice tech issues such as deep fakes, and privacy issues such as data mining by Facebook and other tech companies. Author and keynote speaker Kate O'Neill is known around the world as The Tech Humanist. Hear her approach to keeping technology human and what it will take for emerging technology to succeed from a business standpoint.


Timestamps:

03:15 How do we approach voice design in a human-centric way that is also good for business?

04:30 Weather skill example - take into account what someone using the skill needs, like an umbrella

05:20 Businesses might build voice tech or other tech just to check a box, but it’s better to build for the person on the other end

06:00 Don’t ask “What’s our AI strategy?” - step back and ask “What are we trying to accomplish as a business?”

06:20 Create alignment and relevance between the business and the people outside it

07:00 Who are we building for, and how can we serve their needs?

07:10 Avoid unintended consequences as technology becomes capable of operating at such scale

07:35 Google Translatotron and deep fakes: Translatotron translates spoken words into another language while retaining the VOICE of the original speaker. Read more: https://bigthink.com/surprising-science/translatotron

08:40 Google would now have your voice - what will they do with it? Voice synthesis and deep fakes - the terrifying possibilities (overall: cool but scary)

How we should approach technology such as the Babelfish (Hitchhiker’s Guide): simultaneous translation that preserves the integrity of the sound of your voice. One step further: a sample of your voice is sufficient for machine learning (ML) and AI to synthesize your voice.

09:30 Companies must govern themselves (e.g. Google)

09:50 Government has a responsibility to regulate privacy and data models

10:40 Kate doesn’t have smart speakers in her home because, she says, we don’t have a precedent for protecting user data

11:20 Facebook Ten Year Challenge (Kate’s tweet went viral in January 2019 about the trend of posting a ten-year-old photo next to a current one) - she pointed out that this data could be used to train facial recognition algorithms to predict aging

13:20 We have seen memes and games that ask you to provide structured information turn out to be data mining operations (e.g. Cambridge Analytica) - we have good reason to be cautious

14:40 "Everything we do online is a genuine representation of who we are as people, so that data really should be treated with the utmost respect and protection. Unfortunately, it isn't always." - Kate

15:00 Do we need government to regulate tech?

16:10 “Ask forgiveness, not permission” is clearly the case with Facebook so why are users so forgiving?

20:00 What does a future social network look like with fewer privacy, data mining, and algorithm concerns?


Extra info:

A deep fake (a portmanteau of "deep learning" and "fake") is a technique for human image synthesis based on artificial intelligence. It combines and superimposes existing images and videos onto source images or videos using a machine learning technique known as a generative adversarial network (GAN).
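The adversarial idea behind GANs can be sketched in miniature. This is a hypothetical toy example, not a real image or voice deepfake pipeline: a one-parameter generator learns to shift its output toward a target distribution, while a logistic-regression discriminator tries to tell real samples from generated ones. All names and numbers here are illustrative.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

def mean(xs):
    return sum(xs) / len(xs)

# Generator g(z) = gw*z + gb; discriminator d(x) = sigmoid(dw*x + db).
gw, gb = 0.1, 0.0
dw, db = 0.0, 0.0
lr, batch = 0.05, 32

for step in range(2000):
    real = [random.gauss(3.0, 1.0) for _ in range(batch)]  # "real" data: N(3, 1)
    z = [random.gauss(0.0, 1.0) for _ in range(batch)]     # generator noise
    fake = [gw * zi + gb for zi in z]                      # generated samples

    # Discriminator gradient ascent on log d(real) + log(1 - d(fake)).
    pr = [sigmoid(dw * x + db) for x in real]
    pf = [sigmoid(dw * x + db) for x in fake]
    dw += lr * (mean([(1 - p) * x for p, x in zip(pr, real)])
                - mean([p * x for p, x in zip(pf, fake)]))
    db += lr * (mean([1 - p for p in pr]) - mean(pf))

    # Generator gradient ascent on log d(fake) (non-saturating GAN loss).
    pf = [sigmoid(dw * x + db) for x in fake]
    grad = [(1 - p) * dw for p in pf]  # d log d(fake) / d fake
    gw += lr * mean([g * zi for g, zi in zip(grad, z)])
    gb += lr * mean(grad)

# After training, the generator's output mean (gb) should sit near the
# real data's mean (3.0): the discriminator can no longer tell them apart.
```

Real deepfakes replace these two scalar-parameter players with deep neural networks trained on images or voice samples, but the push-and-pull between generator and discriminator is the same.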


Deep fakes and voice emulation: idea of voice skins and impersonation for fraud:

https://qz.com/1699819/a-new-kind-of-cybercrime-uses-ai-and-your-voice-against-you/

"In March, fraudsters used AI-based software to impersonate a chief executive from the German parent company of an unnamed UK-based energy firm, tricking his underling, the energy CEO, into making an allegedly urgent large monetary transfer by calling him on the phone. The CEO made the requested transfer to a Hungarian supplier and was contacted again with assurances that the transfer was being reimbursed immediately. That too seemed believable."

Subscribe to this podcast and listen free anywhere: beetlemoment.com/podcast



Hosted on Acast. See acast.com/privacy for more information.


78 episodes
