index.bs (33 additions, 17 deletions)
@@ -16,6 +16,7 @@ Former Editor: Hans Wennborg, Google
 Abstract: This specification defines a JavaScript API to enable web developers to incorporate speech recognition and synthesis into their web pages.
 Abstract: It enables developers to use scripting to generate text-to-speech output and to use speech recognition as an input for forms, continuous dictation and control.
 Abstract: The JavaScript API allows web pages to control activation and timing and to handle results and alternatives.
+Markup Shorthands: css no, markdown yes, dfn yes
 </pre>
 
 <pre class=biblio>
@@ -99,7 +100,7 @@ This does not preclude adding support for this as a future API enhancement, and
 User consent can include, for example:
 <ul>
 <li>User click on a visible speech input element which has an obvious graphical representation showing that it will start speech input.</li>
-<li>Accepting a permission prompt shown as the result of a call to <a method for=SpeechRecognition>start()</a>.</li>
+<li>Accepting a permission prompt shown as the result of a call to {{SpeechRecognition/start()}}.</li>
 <li>Consent previously granted to always allow speech input for this web page.</li>
 </ul>
 </li>
@@ -286,17 +287,25 @@ See <a href="https://lists.w3.org/Archives/Public/public-speech-api/2012Sep/0072
-    1. Let <var>audioTrack</var> be the first argument.
-    1. If <var>audioTrack</var>'s {{MediaStreamTrack/kind}} attribute is NOT <code>"audio"</code>, throw an {{InvalidStateError}} and abort these steps.
-    1. If <var>audioTrack</var>'s {{MediaStreamTrack/readyState}} attribute is NOT <code>"live"</code>, throw an {{InvalidStateError}} and abort these steps.
-    1. Let <var>requestMicrophonePermission</var> be <code>false</code>.
-    1. Run the <a>start session algorithm</a> with <var>requestMicrophonePermission</var>.
+    Start the speech recognition process, using a {{MediaStreamTrack}}.
+    When invoked, run the following steps:
+
+    1. Let |audioTrack| be the first argument.
+    1. If |audioTrack|'s {{MediaStreamTrack/kind}} attribute is NOT `"audio"`,
+        throw an {{InvalidStateError}} and abort these steps.
+    1. If |audioTrack|'s {{MediaStreamTrack/readyState}} attribute is NOT
+        `"live"`, throw an {{InvalidStateError}} and abort these steps.
+    1. Let |requestMicrophonePermission| be `false`.
+    1. Run the [=start session algorithm=] with |requestMicrophonePermission|.
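The track-validation steps in this hunk can be sketched in plain JavaScript. This is a minimal illustration, not the browser's implementation: `startWithTrack` is a hypothetical name, the argument is a plain object standing in for a real {{MediaStreamTrack}}, and the hand-off to the start session algorithm is reduced to returning the flag.

```javascript
// Hypothetical sketch of the start(MediaStreamTrack) validation steps.
// A plain object stands in for a real MediaStreamTrack here.
class InvalidStateError extends Error {
  constructor(message) {
    super(message);
    this.name = "InvalidStateError";
  }
}

function startWithTrack(audioTrack) {
  // Step: the track's kind attribute must be "audio".
  if (audioTrack.kind !== "audio") {
    throw new InvalidStateError('track kind must be "audio"');
  }
  // Step: the track's readyState attribute must be "live".
  if (audioTrack.readyState !== "live") {
    throw new InvalidStateError("track must be live");
  }
  // Steps: no microphone prompt is needed, since the caller already holds a
  // live track; the spec then runs the start session algorithm with this flag.
  const requestMicrophonePermission = false;
  return { requestMicrophonePermission };
}
```

Note that both failure modes throw {{InvalidStateError}}, mirroring the spec text, rather than a `TypeError`.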
@@ -321,15 +330,22 @@ See <a href="https://lists.w3.org/Archives/Public/public-speech-api/2012Sep/0072
 </dl>
 
-<p>When the <dfn>start session algorithm</dfn> with <var>requestMicrophonePermission</var> is invoked, the user agent MUST run the following steps:
-
-    1. If the [=current settings object=]'s [=relevant global object=]'s [=associated Document=] is NOT [=fully active=], throw an {{InvalidStateError}} and abort these steps.
-    1. If {{[[started]]}} is <code>true</code> and no <a event for=SpeechRecognition>error</a> or <a event for=SpeechRecognition>end</a> event has fired, throw an {{InvalidStateError}} and abort these steps.
-    1. Set {{[[started]]}} to <code>true</code>.
-    1. If <var>requestMicrophonePermission</var> is <code>true</code> and [=request permission to use=] "<code>microphone</code>" is [=permission/"denied"=], abort these steps.
-    1. Once the system is successfully listening to the recognition, [=fire an event=] named <a event for=SpeechRecognition>start</a> at [=this=].
-
-</p>
+When the <dfn>start session algorithm</dfn> with
+|requestMicrophonePermission| is invoked, the user agent MUST run the
+following steps:
+
+1. If the [=current settings object=]'s [=relevant global object=]'s
+    [=associated Document=] is NOT [=fully active=], throw an {{InvalidStateError}}
+    and abort these steps.
+1. If {{[[started]]}} is `true` and no <a event
+    for=SpeechRecognition>error</a> or <a event for=SpeechRecognition>end</a> event
+    has fired, throw an {{InvalidStateError}} and abort these steps.
+1. Set {{[[started]]}} to `true`.
+1. If |requestMicrophonePermission| is `true` and [=request
+    permission to use=] "`microphone`" is [=permission/"denied"=], abort
+    these steps.
+1. Once the system is successfully listening to the recognition, queue a task to
+    [=fire an event=] named <a event for=SpeechRecognition>start</a> at [=this=].
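The {{[[started]]}} re-entry guard in the algorithm above can be sketched as a small state machine. This is a hypothetical illustration under assumed names (`RecognitionSession`, `startSession`, `fireEnd` are not spec terms); the fully-active-document check, the permission check, and the queued <a event for=SpeechRecognition>start</a> event task are elided as comments.

```javascript
// Hypothetical sketch of the [[started]] guard in the start session algorithm.
class InvalidStateError extends Error {
  constructor(message) {
    super(message);
    this.name = "InvalidStateError";
  }
}

class RecognitionSession {
  constructor() {
    this.started = false; // models the [[started]] internal slot
    this.ended = false;   // true once an error or end event has fired
  }

  startSession(requestMicrophonePermission) {
    // (Fully-active-document check elided.)
    // A second start is rejected until an error or end event has fired.
    if (this.started && !this.ended) {
      throw new InvalidStateError("recognition already started");
    }
    this.started = true;
    this.ended = false;
    // (Microphone permission check and the queued "start" event task elided.)
  }

  fireEnd() {
    this.ended = true; // stands in for firing the "end" (or "error") event
  }
}
```

This captures why calling `start()` twice in a row throws, while calling it again after an `end` event succeeds.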