The Final!
I’ve spent many hours over the last six to eight months looking
at nearly eighty ways to use Speech-to-Text apps and webpages to help students
get feedback in class on their speaking, along with suggestions on how to improve
their performance before redoing the task.
Little by little I have narrowed the candidates to a
dozen, then to four, and now it’s The Final!
· https://soundtype.ai/en
· https://www.notta.ai/en
But this is not a sport and there doesn’t have to be a winner:
I think either of these apps would be perfect for students to use in and out of
class to help them improve their speaking.
Here you can read my instructions for how language learners
can use the two best apps:
SoundType AI Instructions | NottaAI Instructions
Both apps produce very accurate transcriptions in a
minute or two for a wide range of languages. They both allow at least
eight minutes of transcription per recording (longer than 99.9% of my
students’ recordings). Both also offer a generous monthly allowance of at
least two hours of recording time, which is way beyond even my keenest
students’ output.
The two apps are simple for learners to use to:
- Record themselves
- Get transcripts quickly
- Listen back, if needed
- Edit out mistakes in the transcripts
- Copy and paste transcripts into AI to get
feedback and feedforward
The apps don’t require the very latest versions of the
Android and iOS operating systems, so most students will be able to use them on
either system:
- Android 7.0 and up
- iPhone: iOS 12.1 or later
- One even works on Apple Watches!
How to use these apps:
It is possible for half the class or even more to be
recording themselves at the same time, and if they hold their phones as they
normally would, the transcriptions will still work. Earphones are essential if
students are going to listen back to their performances. I haven’t been in a
position to test how the time needed to get transcriptions is affected by
having ten or twenty students doing it at the same time, but with two it makes
no difference.
If you are wondering how it’s possible to get so many
students speaking at the same time, here are some suggestions:
· Communicatively, with students working in pairs
(the main speaker records themselves)
  o retelling a story the other doesn’t know
  o describing a picture or video the other can’t see
  o being interviewed by the other, with questions they can’t see
· As an exercise (everyone recording themselves)
  o reading a text aloud
  o giving the answers to a grammar exercise
  o doing a pronunciation exercise
  o describing a series of pictures that tell a story
What can the teacher do to help?
- Giving clear instructions
- Checking students are following these
- Monitoring
- Providing help when required (the recording can be paused)
- Suggesting prompts for the AI part
- Encouraging students to take notes on emergent
language suggested by AI
- Getting students to share these notes in class from
time to time
- Asking for task repetition and recording again
- Offering students the choice of which of their
recordings will be used for continuous assessment every
week/fortnight/month/term
What will be the effect?
According to some theorists, learners need between 12 and 20
encounters with a new word to begin to use it themselves. Undoubtedly, this
applies to corrections as well, so the difference between ‘for’ and ‘since’ will
not be learnt by many after just one correction. Emergent language must be
subject to the same requirements as new vocabulary. Fortunately, the human
brain notices further encounters with new language much more easily after the
first ‘noticing’, so for all common language the 12 to 20 exposures can come
relatively quickly if the learner is doing extensive reading, extensive
listening/watching, or living in the country where the language is spoken.
Less frequently encountered language may require some study
techniques, and here the notes students have taken will be an essential
starting point.
Now, let’s turn our attention to the two runners-up:
· https://app.transkriptor.com
· https://app.fireflies.ai
Here you can read my instructions for how language learners
can use these two apps:
Transkriptor Instructions | FirefliesAI Instructions
Transkriptor loses its place in the final only
because they have introduced a new limit for ‘Trial accounts’ that makes use by
students more complicated.
According to Transkriptor’s Help desk: "With the trial account, we offer
you 90 minutes of free transcription, each file's transcription output is
limited to 80% of the file duration, up to 7 minutes. Transcription of the
files longer than 7 minutes would be limited to 5:36 minutes in the trial
period."
To get round this new restriction, simply leave the recording on for 25%
longer than you need: because only 80% of each file is transcribed, recording
for five minutes, for example, still gets you a full four minutes of
transcription.
Fireflies lost its place as well, because of a
different problem that complicates student use unnecessarily: there is no way
to edit transcripts in the app, although of course this can be done once the
transcript has been exported. That is where the problem gets even worse, as
exporting involves four steps rather than one and the instructions are
different for Android phones and for iPads/iPhones.