<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><br class=""><div><br class=""><blockquote type="cite" class=""><div class="">On 6 Apr 2020, at 05:04, Aaron van Meerten <<a href="mailto:aaron.van.meerten@gmail.com" class="">aaron.van.meerten@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div style="caret-color: rgb(0, 0, 0); font-family: Palatino-Roman; font-size: 18px; font-style: normal; font-variant-caps: normal; font-weight: normal; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration: none;" class="">I don’t believe the encryption/decryption is all that resource intensive in comparison to the encoding and decoding of the video and audio streams themselves.</div></div></blockquote></div><div class=""><br class=""></div>The MyTurn speaker-selection system could play an important role in resolving these difficulties, at least for multi-party discussions. With Zoom and similar systems, the default is that each user chooses whether to stream video. A meeting can therefore have ten or twenty images of listeners being continuously encoded and decoded, even though the current speaker is the focus of attention. The paradox is that most facilitator effort is devoted to ensuring that sound is muted for everyone but the speaker, yet these efforts have very little effect on encoding resources, since audio is far less bandwidth-intensive than video. <div class=""><br class=""></div><div class="">If MyTurn were integrated into Jitsi, only one speaker at a time would be using encoding resources, and listeners would only have to decode one channel at a time. 
Since listeners send request-to-speak signals during the current speaker’s turn, the system has advance knowledge of which channel will be sending next and could stage input from the channel of the likely next speaker(s). This would make it easier to avoid startup delays that would otherwise be encountered. It could also overcome another common problem in current meetings: new speakers often forget to turn on their audio.<br class=""><div class=""><br class=""></div><div class=""><br class=""></div><div class="">dss</div><div class=""><br class=""></div><div class=""><br class=""><div class="">
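As a rough sketch of the idea, the floor-control logic might look like the following. All names here are hypothetical illustrations; neither MyTurn nor Jitsi exposes such an API, and real pre-staging would involve signalling to the client’s media stack rather than setting a flag.

```python
# Hypothetical sketch of MyTurn-style floor control with channel pre-staging.
# None of these class or method names correspond to a real MyTurn or Jitsi API.
from collections import deque


class Participant:
    def __init__(self, name):
        self.name = name
        self.muted = True           # listeners are muted by default
        self.channel_ready = False  # becomes True once the channel is pre-staged


class FloorControl:
    def __init__(self):
        self.queue = deque()        # pending request-to-speak signals, in order
        self.current_speaker = None

    def request_to_speak(self, participant):
        """A listener signals intent during the current speaker's turn."""
        if participant not in self.queue:
            self.queue.append(participant)
            # Advance knowledge: pre-stage the likely next speaker's channel
            if len(self.queue) == 1:
                self.prestage(participant)

    def prestage(self, participant):
        """Warm up the media channel before the turn begins (avoids startup delay)."""
        participant.channel_ready = True

    def end_turn(self):
        """Current speaker yields; hand the floor to the next in the queue."""
        if self.current_speaker is not None:
            self.current_speaker.muted = True
        if self.queue:
            self.current_speaker = self.queue.popleft()
            self.current_speaker.muted = False  # auto-unmute: no forgotten mics
            if self.queue:                      # stage the speaker after that
                self.prestage(self.queue[0])
        else:
            self.current_speaker = None
        return self.current_speaker
```

The point of the sketch is only that the request queue gives the system everything it needs: the head of the queue is pre-staged while the current turn is still in progress, and the hand-off both unmutes the incoming speaker and mutes the outgoing one.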
<div style="color: rgb(0, 0, 0); letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div style="color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; font-style: normal; font-variant-caps: normal; font-weight: normal; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px;">David Stodolsky, PhD Institute for Social Informatics<br class="">Tornskadestien 2, st. th., DK-2400 Copenhagen NV, Denmark<br class=""><a href="mailto:dss@socialinformatics.org" class="">dss@socialinformatics.org</a> Tel./Signal: +45 3095 4070</div></div></div>
</div>
<br class=""></div></div></body></html>