
Sunday, 9 February 2025

Turboscribe versus Rev - the two best and most generous transcription tools

I asked NotebookLM to do the heavy lifting of comparing three transcripts saved from Turboscribe (x1) and Rev (x2)

This is an edited version of how it replied to my question:

“I am interested in any differences between the three transcripts particularly regarding the use of interjections and the use of phrases like 'like', 'you know', 'um' etc”

Differences appear in the transcripts regarding interjections and phrases such as "like," "you know," and "um." The transcript generated by TurboScribe.ai includes these kinds of interjections and phrases more frequently than the two transcripts generated by Rev.com.

For example, the TurboScribe transcript includes the parts in bold that are missing from the Rev transcripts:

·         "[M] Right. It's like we're on the edge of this revolution, you know, and how we learn and practise speaking. Like it's a total paradigm shift"

·         "[M] Exactly. And they went step by step, you know, first analyse the speech, then identify errors"

·         "[M] Exactly. So you got to clean up the transcript a bit. And then that's when the fun starts. Use ChatGPT to actually like analyse your speech and give you feedback"

·         "[M] They encourage teachers to, like, play with it, change it, adapt it, you know, and even share their own versions with other teachers"

These examples show the more conversational style captured by TurboScribe, including interjections ("wow") and fillers ("like", "you know"). The two transcripts captured by Rev appear to have had some of these conversational elements removed or edited out, presenting a cleaner version of the dialogue.


On the other hand, below is a comparison of how Rev.com, with the verbatim option selected, and Turboscribe, which has no such option, handle transcribing. It is very revealing:

Rev.com:


Rev heard ‘boat’ and ‘things’, and, as it had been asked to create a verbatim transcript, it didn’t examine how likely this sequence of words was.

Turboscribe.ai:


Turboscribe, on the other hand, correctly interpreted the learner’s intention because ‘bought’ and ‘thinks’ are more likely in the context. But this desire to predict what is more likely also led it to correct ‘womans’ to ‘women’, ‘say’ to ‘said’ and ‘are’ to ‘was’, and to insert ‘can’ into the phrase ‘you can guess’.

Both Turboscribe and Rev interpreted what I heard as ‘ask her’ as ‘asked her’, but I imagine that is because the /t/ is hard to produce and is often omitted by native speakers. Why ‘that’ was omitted by Turboscribe in the phrase ‘said no’ must also be down to frequency: about 161,000,000 results without ‘that’ versus about 26,900,000 with it, according to Bing.

There are other important differences between free accounts using Turboscribe.ai and Rev.com:

The most important advantage of Turboscribe is that it can be used from the age of 13 with parents’ permission. The other factor to bear in mind is that Rev.com only works in English on a free account at the moment. The transcripts are automatically synchronised while listening on Rev.com, whereas with Turboscribe.ai the user needs to scroll the page manually to follow them.

But the key reason for not being able to recommend Rev.com on a mobile device is that it is no longer possible to edit or download transcripts using a mobile device.

 

When using a PC, this is not a problem.

Support on Rev.com is excellent, and I haven’t needed to contact support on Turboscribe.ai.





Saturday, 25 January 2025

The podcast about a 5-step prompt for students to use to get feedback on their speaking

I chose 10 short recordings made by my pre-intermediate and intermediate students when I was a teacher and got verbatim transcripts using Rev.com 

Perhaps as a result of selecting only from recordings of under one minute, there are more PINT (pre-intermediate) recordings than INT (intermediate) ones: 7 x PINT and 3 x INT.

I then made screen recordings on my Android phone of me commenting on the recordings and playing them as I outlined the ideas behind the technique that I'll be talking about at various conferences this year.

You can watch them here. They vary in length from under two minutes to just over seven minutes. Here they are in the order I recorded them:


Some were recorded at home (4), but more (6) were recorded in class with the noise of five or six other people speaking in the background. Nonetheless, the transcriptions didn't contain too many mistranscriptions caused by faulty pronunciation or the use of unfamiliar proper nouns.

When looking at my whole collection of student recordings, there are many more recorded in class than at home: everyone recorded themselves at least once in class every day, and, despite my efforts to persuade students to record themselves at home, very few of them did so.

Almost inevitably nowadays, I thought it might work well to upload all 10 videos to NotebookLM, and I was pleasantly surprised that mp4 files were as acceptable as mp3s.

I used this customisation:

This is a series of short videos based on recordings by Pre-intermediate and Intermediate students made in class or at home. Each video elaborates on Chris Fry's ideas about how students can record themselves, get transcripts and then seek help from ChatGPT using the long 5-step prompt he is developing.

Please concentrate on the overall concept rather than the contents of the students' recordings although you can mention extracts that illustrate the main points of the process.

The resulting podcast was very flattering, and I was amazed at how well NotebookLM collected the different comments I made about each of the videos and made a coherent account out of them.

This is a link to the NotebookLM audio overview, but I really prefer playing it with a synchronised transcript using Rev.com, which you can do by clicking on the link below.

Transcription of podcast about recordings using Turboscribe ai and ChatGPT to help students boost their speaking

Sunday, 29 December 2024

An experiment with a 5-step prompt with 5 types of GenAI

I decided that it must be possible to combine a whole routine sequence of prompts into one rather long prompt.

I wanted to get:

  1. A student transcript with the errors marked so they could try to see their errors
  2. A corrected version of the transcript
  3. An improved version at the student's level (A2, in this case)
  4. An improved version at the next level (B1, in this case)
  5. An improved version at two levels up (B2, in this case)

The secret was to include the five steps in the prompt but to instruct GenAI to wait for the prompt “Next” before moving on to the next step.
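For anyone curious about how this trigger-word pattern might look outside a chat window, here is a minimal sketch of the same idea driven through an API. This is my own illustration rather than the classroom workflow described here: it assumes the OpenAI Python SDK with an API key in the environment, the model name is just a placeholder, and the prompt and transcript strings are abbreviated stand-ins for the full long prompt shown further down.

    # A minimal sketch of the "wait for 'Next'" idea in programmatic form.
    # Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set,
    # and "gpt-4o-mini" is only a placeholder model name.
    from openai import OpenAI

    client = OpenAI()

    LONG_PROMPT = "You will be provided a transcript... After each step, wait for me to say 'next'..."  # abbreviated
    TRANSCRIPT = "The history start in the summer of 2011. ..."  # paste the student's transcript here

    messages = [{"role": "user", "content": LONG_PROMPT + "\n\nTranscript:\n" + TRANSCRIPT}]

    for step in range(1, 6):  # the five steps of the long prompt
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        answer = reply.choices[0].message.content
        print(f"--- Step {step} ---\n{answer}\n")
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user", "content": "Next"})  # the trigger word that moves the model on

In a classroom, of course, the student simply types "Next" in the chat; the loop above just makes the step-by-step structure explicit.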

I then took a short recording made by a very good A2 student and used Rev.com to get the transcript. The idea is that the student has to copy the long prompt from a WhatsApp group, for example, and paste it into the chosen GenAI and then copy the transcript from Turboscribe.ai or Rev.com  and paste it into the same GenAI.

I did this with these 5 types of GenAI:

  1. ChatGPT
  2. Gemini
  3. Claude
  4. Copilot
  5. Deepseek

I then copied the output from each GenAI for each step and pasted it into Pearson’s GSE Text Analyzer ( https://www.english.com/gse/teacher-toolkit/user/textanalyzer ), which gave me GSE and CEFR levels for all 25 versions of the transcript.

Apart from looking carefully at the English used in each version to try to understand what language had determined the levels given for them all, I also made an Excel spreadsheet with the two sets of levels. With these I was able to produce two graphs. The first one shows what happened with each type of GenAI:


At first glance Claude was the best at producing increasingly sophisticated versions of the student’s transcript, although they were always half a CEFR level too high. Mark you, the original transcript was already half a level higher as she was a very good student.

Equally obvious is the fact that Copilot was useless!

ChatGPT, Gemini and Deepseek failed to produce increasingly sophisticated versions across the four levels, so they didn’t do what I had intended.

In fact, as can be seen in this second chart, the whole idea didn’t work on average:


Once again, this may be a result of the student’s original transcript being higher than expected (A2+ rather than A2). Maybe the lesson to be learnt from this is that, instead of using fixed levels for each step up, the different GenAIs may be able to produce a new version of the transcript one level higher on the CEFR scale. This would have the added advantage that the same long prompt could be useful for all students in a class and across all courses.
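As a side note, the spreadsheet-and-graphs step could also be scripted. Below is a rough sketch, not the spreadsheet I actually used, of how the 25 results (5 types of GenAI x 5 steps) might be organised and charted; it assumes pandas and matplotlib are installed, and the zeros are placeholders to be replaced with the GSE scores returned by Pearson's GSE Text Analyzer.

    # A rough sketch for charting the experiment's results.
    # The zeros are placeholders: fill in the GSE score that the Text Analyzer
    # gives each GenAI's output at each of the five steps.
    import pandas as pd
    import matplotlib.pyplot as plt

    steps = ["Errors marked", "Corrected", "A2 version", "B1 version", "B2 version"]
    results = pd.DataFrame(
        {
            "ChatGPT":  [0, 0, 0, 0, 0],
            "Gemini":   [0, 0, 0, 0, 0],
            "Claude":   [0, 0, 0, 0, 0],
            "Copilot":  [0, 0, 0, 0, 0],
            "Deepseek": [0, 0, 0, 0, 0],
        },
        index=steps,
    )

    # First chart: what happened with each type of GenAI
    results.plot(marker="o", ylabel="GSE score", title="GSE level of each GenAI's output per step")
    plt.show()

    # Second chart: the average across the five GenAIs
    results.mean(axis=1).plot(marker="o", ylabel="GSE score", title="Average GSE score per step")
    plt.show()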

Here is the original version of the long prompt derived with help from Turboscribe and ChatGPT’s advice. (I hope the concept of including steps and a trigger word will be useful):

“You will be provided a transcript as well as instructions about what to do with that transcript. Unless otherwise specified, your response should be in the same language as the transcript.[1] Here are your instructions, which you must follow:

Instructions:

  1. I will provide a transcript. For each step, you should follow the instructions carefully.
  2. For each task, do not include timestamps unless specifically requested.
  3. After each step, wait for me to say "next" before proceeding. 

Steps:

  1. Mark the errors in the transcript in bold, but do not correct them.
  2. Correct the errors, marking the changes in bold and leaving the original errors in brackets.
  3. Improve the transcript for A2 level students, marking improvements in bold.
  4. Improve the transcript to a B1 level, marking changes in bold.
  5. Finally, enhance the transcript to a B2 level, marking improvements in bold. 

[Sample transcript of an A2 student’s recording:]

The history start in the summer of 2011. Anna went on holiday with some friends on island. The photo was taken on hill. Hill is a little mountain and called Ana. The photo is important for her because the stone is a mysterious for her. And she put your hands around it, the stone, and, and she was sleeping. And this photo there are in other places because in your mobile phone, computer, et cetera.”


[1] These two sentences came from Turboscribe’s Custom prompt

Sunday, 17 November 2024

How to use Rev, ChatGPT, NotebookLM to help students improve their English

I've made a 30-minute video showing how students can record themselves using Rev.com, study a transcript of their recording and ask ChatGPT to help them identify errors, see what they could have said, and see how they should be able to express the same thing at the end of the course or next year.

A new idea I played with is to upload a series of separate prompts for ChatGPT to a class WhatsApp group so students can simply copy each of them in turn and paste them in instead of having to type them out.




The video was made using an iPad and has been heavily edited.

Rev.com is great as it gives everyone 300 minutes a month on a free account, but at present it only works for English and for people over 18. Turboscribe.ai, on the other hand, works in more than one hundred languages and can be used by people of 13-18 with parental permission as well as for over-18s.


Friday, 2 August 2024

Luggage for Children Turboscribe & ChatGPT - how to get corrections and suggestions for improvements

Breaking News!

Students can now record themselves using Turboscribe.ai

See an example showing how the transcript from Turboscribe can be pasted into ChatGPT to get corrections & an improved version at B1

Luggage for Children Turboscribe & ChatGPT

Thursday, 1 August 2024

Transcript correction and enhancement by Gemini, ChatGPT, Copilot or Claude


https://g.co/gemini/share/16c0248c7a9f
Transcript correction by #Gemini can be just as useful as that provided by #ChatGPT or #Copilot or #Claude
Try this prompt with them all:

Can you correct this transcript marking the errors and the corrections in some way? One evening in October, Hannah was at work. She was meeting Jamie at 5.30. She looks her watch and it was 5.20. It was raining and she was in a hurry because she has 10 minutes to meet Jamie. She was driving along the high street and she was driving very fast. Suddenly a man crossed the road and Hannah didn't see him.
Check to see if Read Aloud works on them all as it is very useful for #EFL #MFL #ESL students. 

Then use this prompt:

How could an ... level student express this better? (Try it with A2, B1 or B2.)

The transcript was produced by #Turboscribe

But the innovative thing is that Turboscribe now offers a microphone option, so students can record themselves live three times a day for up to 30 minutes each time with a free account. 

Wednesday, 12 June 2024

Asking ChatGPT to use strikethrough to mark errors

In ChatGPT: "Can you correct the errors and mark the corrections in bold and show the mistakes before the corrections marked with strikethrough?"

Why didn't I think of asking it to use strikethrough before?




Does it work with Copilot, Gemini and Claude, too?



Wednesday, 31 January 2024

A video showing how your students can use SoundType AI to get feedback on their speaking


I decided to record a video of a simulation of the complete process of how a student can record herself on SoundType AI, get a transcript, edit it, and copy it to Copilot and ask for it to be corrected and for suggestions for improvements at her level. It includes listening practice, too!

In this situation the student would already have the notes they used to retell the story. The same notes, plus any further notes from the AI's feedback, could be used to repeat the task, recording it again on SoundType AI (obviously) and repeating the request for correction and suggestions for improvements.

I’m planning to look next at getting students using Notta AI and ChatGPT in the same way.

Information about SoundType AI and how to download it

Instructions on how to use SoundType AI

Tuesday, 23 January 2024

Instant feedback on speaking in class for everyone

The Final!

I’ve spent many hours over the last six to eight months looking at nearly eighty ways to use Speech-to-Text apps and webpages to help students get feedback in class on their speaking and suggestions on how to upgrade their performance prior to redoing the task.

Little by little I have reduced the candidates to a dozen, then four, and now it’s The Final!

·         https://soundtype.ai/en 

·         https://www.notta.ai/en    

But this is not sport and there doesn’t have to be a winner: I think either of these apps would be perfect for use by students in and out of class to help them improve their speaking.

Here you can read my instructions for how language learners can use the two best apps:

·         SoundType AI Instructions

·         Notta AI Instructions

Both apps produce very accurate transcriptions in a minute or two for a wide range of languages. They both allow at least eight minutes of transcription per recording (longer than 99.9% of my students’ recordings). Both also offer a generous monthly allowance of recording time of at least two hours, which is way beyond even my keenest students’ output.

The two apps are simple for learners to use to:

  • Record themselves
  • Get transcripts quickly
  • Listen back, if needed
  • Edit out mistakes in the transcripts
  • Copy and paste transcripts into AI to get feedback and feedforward

The apps don’t require the very latest versions of the Android and iOS operating systems, so most students will be able to use them on either system:

  • Android 7.0 and up
  • iPhone iOS 12.1 or later
  • One even works on Apple Watches!

How to use these apps:

It is possible for half the students, or even more, to be recording themselves at the same time, and if they hold their phones as they normally do, the transcriptions will still work. Earphones are essential if students are going to listen to their performances. I haven’t been in a position to test how the time needed to get transcriptions is affected by having ten or twenty students doing it at the same time, but with two it makes no difference.

If you are wondering how it’s possible to get so many students speaking at the same time, here are some suggestions:

·     Communicatively with students working in pairs (main speaker records themselves)

o   retelling a story the other doesn’t know

o   describing a picture or video the other can’t see

o   being interviewed by the other with questions they can’t see

·      As an exercise (everyone recording themselves)

o   reading a text aloud

o   giving the answers to a grammar exercise

o   doing a pronunciation exercise

o   describing a series of pictures that tell a story

What can the teacher do to help?

  •    Giving clear instructions
  •    Checking students are following these
  •    Monitoring
  •    Providing help when required (Pause recording)
  •    Suggesting prompts for the AI part
  •    Encouraging students to take notes on emergent language suggested by AI
  •    Getting students to share these notes in class from time to time
  •    Asking for task repetition and recording again
  •    Offering students the choice of which of their recordings will be used for continuous assessment every week/fortnight/month/term

What will be the effect?

According to some theorists, learners need between 12 and 20 encounters with a new word to begin to use it themselves. Undoubtedly this applies to corrections as well, so the difference between ‘for’ and ‘since’ will not be learnt by many after just one correction. Emergent language must also be subject to the same requirements as new vocabulary, but fortunately the human brain notices further encounters with new language much more easily after the first ‘noticing’, so for all common language the 12 to 20 exposures can come relatively quickly if the learner is doing extensive reading or extensive listening/watching, or is living in the country where the language is spoken.

Less frequently encountered language may require the use of some study techniques and here notes taken will be essential as a starting point.

Now, let’s turn our attention to the two runners-up:

·         https://app.transkriptor.com

·         https://app.fireflies.ai


Here you can read my instructions for how language learners can use these two apps:

·         Transkriptor Instructions

·         FirefliesAI Instructions

Transkriptor only loses a place in the final because they have introduced a new limit for ‘Trial accounts’ that makes use by students more complicated:

According to Transkriptor's Help desk, "With the trial account, we offer you 90 minutes of free transcription, each file's transcription output is limited to 80% of the file duration, up to 7 minutes. Transcription of the files longer than 7 minutes would be limited to 5:36 minutes in the trial period."

To get round this new restriction, simply leave the recording on for 25% longer than you need: since the output is capped at 80% of the file length, recording for, say, 5 minutes gets you the 4 minutes of transcript you actually want (up to the 7-minute point).

 

Fireflies lost its place as well because of a different problem that complicates student use unnecessarily: there is no way to edit transcripts in the app, although of course this can be done after the transcript has been exported. But that is where the problem gets even worse, as the export process involves four steps rather than one and the instructions are different for Android phones and iPads/iPhones.

Saturday, 20 January 2024

Instructions for the 'Final Four' best transcription apps for language learners 4/4 – Fireflies AI

 Fireflies Instructions:

Address: https://app.fireflies.ai

Accuracy: 92%

Number of Languages: 69

What you get free: 800 minutes of storage; unlimited transcription

Requirements: Android 8.0 and up; iPhone iOS 13.0 or later

ChatGPT requires Android 6.0 and up or iOS 16.1 or later
Copilot requires Android 8.0 and up or iOS 15.0 or later

 

Recording yourself:

Go to Home and click on +Transcribe, choose Record audio, click on the red circle, and record yourself. (You may need to give fireflies.ai permission to use the microphone.) You can pause the recording if necessary. When you end the recording, give it a name that you will recognise and click on Upload. Click on View status and the transcription will be ready in less than a minute.

Click on the recording when it says Completed and pause it. Then click on Transcript in the middle, lower down. Click on Speaker 1 and write your name, tick Apply to all paragraphs from this speaker and click on Update.

 You can’t edit the transcript here, but see below.

Exporting/Sharing/Copying your transcription:

Click on the three dots at the top on the right and choose Download meeting, then Download Transcript. Then uncheck Include timestamps and uncheck include speakers if it’s a monologue and choose Word (DOCX).

Android:

When it says Download this file? click on Download. Click on the three dots on the right of the file, choose Open with, Docs.

Editing the transcription:

Click on the Pen icon and you can correct any obvious mistakes in it.

iPad/iPhone:

Click on Download and then Show. Click on the file to open it. Click on the Upload icon in the top corner on the right, swipe the row of coloured icons left and click on More, then swipe up and click on Docs.

Editing the transcription:

Click on Save to Drive, then click on the Pen icon and you can correct any obvious mistakes in it.

Exporting/Sharing/Copying your transcription:

Hold your finger down on the text, Select all the text and click on Copy.

Open ChatGPT or Copilot and write your instructions ("Can you correct my mistakes and mark the corrections in bold?") at the start of the prompt and then paste in the transcript.

Friday, 19 January 2024

Instructions for the 'Final Four' best transcription apps for language learners 3/4 – Notta AI

 Notta Instructions:


Address: https://www.notta.ai/en

Accuracy: 94%

Number of Languages: 104

What you get free: 120 minutes / month + 10 prompts

Requirements: Android 7.0 and up; iPhone iOS 11.0 or later; Apple watchOS 7.0 or later

ChatGPT requires Android 6.0 and up or iOS 16.1 or later
Copilot requires Android 8.0 and up or iOS 15.0 or later

From Notta AI terms and conditions - "The Services are not intended for and should not be used by anyone under the age of 18. By using the Services, you represent and warrant that you meet the foregoing eligibility requirement."

Setting the default transcription language:

Go to Home and click on the plus sign, click on the language under The current transcription language is. Choose one of the 104 languages by starting to type the language you want in the Search box at the top and then click on it when it appears below. Click on Done in the top right-hand corner to finish.
If you want to change the App language, click on the Mine icon at the bottom on the right and then on Advanced. Click on App Language and you can choose from the four languages offered at present: English (United States), Japanese, Chinese (Mandarin) or Chinese (Traditional). Click on Done in the top right-hand corner, then the arrow in the top left-hand corner, and then Home at the bottom on the left.

Recording yourself:

Go to Home and click on the plus sign, choose Record an Audio and you can record yourself immediately. (You may need to give Notta permission to use the microphone.) You can pause the recording if necessary. When you have finished click on Stop and give the recording a name you will recognise.

Editing the transcription:

Click on Unknown Speaker and type your name at the bottom where it says Add or search for a speaker. Tick Apply to all related blocks and click on the blue tick on the right. You can click on the Edit icon at the top on the left and then you can correct any obvious errors in the transcript. Click on Save.

Exporting/Sharing/Copying your transcription:

Click on the three dots at the top on the right and choose the Copy icon.

Open ChatGPT or Copilot and write your instructions ("Can you correct my mistakes and mark the corrections in bold and remove the timestamps?") at the start of the prompt and then paste in the transcript.

 

Thursday, 18 January 2024

Instructions for the 'Final Four' best transcription apps for language learners 2/4 – Transkriptor

 Transkriptor Instructions:

Address: https://app.transkriptor.com

Accuracy: 96%

Number of Languages: 60

What you get free: 90 minutes of free transcription (5-minute limit per file)
+ Earn 120 minutes if you install Transkriptor and leave a review on Google Play, the App Store, the Chrome store or TrustPilot
+ Earn 90 minutes for each platform you share on: Twitter or Facebook
+ Earn 180 minutes for each review on Capterra or G2

Requirements: Android 5.0 and up; iPhone iOS 13.0 or later

ChatGPT requires Android 6.0 and up or iOS 16.1 or later
Copilot requires Android 8.0 and up or iOS 15.0 or later
According to Transkriptor's Help desk, "With the trial account, we offer you 90 minutes of free transcription, each file's transcription output is limited to 80% of the file duration, up to 7 minutes. Transcription of the files longer than 7 minutes would be limited to 5:36 minutes in the trial period."
To get round this new restriction, simply leave the recording on for 25% longer than you need.

Recording yourself:

Go to Records and then click on Start Recording and record yourself. (You may need to give Transkriptor permission to use the microphone.) You can pause the recording if necessary. When you end the recording, click on Transcribe. (You may need to specify that the language is English and that you want the Standard service.)

A four-minute recording will be transcribed in less than two minutes.

Editing the transcription:

You can change Spk_1 to your name if you click on the pen in the top right-hand corner to Edit. It will then change them all. At the same time you can correct any obvious errors in the transcript. Click on Save to save your changes.

Exporting/Sharing/Copying your transcription:

Get Full Transcript won’t work unless you upgrade. Click on the Share icon at the bottom on the right and choose Share Text and then Only Text. On an Android phone the Copy icon is halfway down on the right. On an iPhone/iPad you need to pull the page up to see it.

Open ChatGPT or Copilot and write your instructions ("Can you correct my mistakes and mark them in bold?") at the start of the prompt and then paste in the transcript.