
Why does speech to text (dictation) suddenly become counterproductive after you've owned the device a couple of years?

While researching my topic, I actually stumbled across an older post that attributed this all to an update. However, I am very skeptical that an update caused this. It used to be that my experience using speech to text was very positive and productive, and it actually seemed to be learning how I talk. However, now it's wrong approximately 92% of the time, or something ridiculously high like that. It is wrong so frequently that I don't even want to use it. I am wondering: maybe if I delete my dictionary totally, would that take it back to zero, so to speak? I mean, it is so far off that I don't understand how it thinks it heard what it printed out. And as far as the effect it's had on Siri, I forgot I even had Siri anymore.



Down below is what dictation actually output for that last sentence, and typically the two don't match whatsoever:


And as far as the factors head on Siri, I had forgot that I even have Siri anymore.


In closing, I just want to point out that it's not the phone or microphone that's defective, because I have three iPhones: an iPhone 12 mini, an iPhone 12, and an iPhone SE 2. And they all act the same.



enclosing I just wanna point out that it’s not the phone or microphone that’s defective because I have three iPhones one is the 12 mini one is a brand new 12 and this one is the iPhone S eat you

iPhone 12, iOS 16

Posted on Dec 25, 2023 8:07 PM


2 replies

Dec 25, 2023 11:56 PM in response to finesse’n

This is a user-to-user public forum.


This platform serves as a space for users to engage in meaningful conversations, share information, and exchange ideas related to Apple products. It's a community-driven initiative where users support each other by sharing their experiences and technical expertise in using Apple devices. While Apple Inc. is not actively participating in these discussions, the forum is a valuable resource for seeking guidance and assistance from fellow users who possess a wealth of experience with Apple products. Feel free to explore the discussions, ask questions, and benefit from the collective knowledge of this community.



Please note that this is a public forum, so when attaching a screenshot, please avoid including any personal credentials such as IP addresses, card details, email IDs, Apple IDs, IMEI numbers, serial numbers, phone numbers, order IDs, invoices, or any identifiable location information if you are sharing a map.



Hence, it may not be feasible to help with any of the following:


  1. Suggestions and Feedback for improvements --> Apple Inc. says: "We read all feedback carefully, but we are unable to respond to each submission individually."
    1. Here is how for iPhone: on the feedback page, select your country or region, choose the feedback type, and enter your comments.
    2. Feedback - iPhone - Apple
  2. Any other issue --> Contact – Official Apple Support (IN) & Contact Apple for support and service


Dec 25, 2023 11:57 PM in response to finesse’n

The accuracy and performance of voice dictation can be influenced by contextual factors. Context plays a significant role in speech recognition systems as they rely on patterns, language models, and statistical algorithms to interpret spoken words.


Contextual factors that can impact the accuracy of voice dictation include:


  1. Words that sound similar but have different spellings and meanings can lead to errors in transcription. For example, "some" and "sum" or "thing" and "think" might be misinterpreted due to their similar pronunciation.
  2. The structure and grammar of a sentence can affect how speech recognition systems interpret and transcribe spoken words. Unusual or complex sentence structures, grammatical errors, or incorrect word order can potentially lead to transcription errors.
  3. Voice recognition systems may have a predefined vocabulary and be trained on specific language models. If you use domain-specific or technical terms that are not part of the system's training data, it might struggle to accurately transcribe those words (see the developer-level sketch after this list).
  4. Different users may have distinct speech patterns, accents, or pronunciation, which can affect the accuracy of voice dictation. Speech recognition systems typically adapt and improve over time by learning from individual user data.
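
To make point 3 concrete from a developer's point of view: Apple's Speech framework lets an app bias recognition toward unusual or domain-specific words via contextualStrings. The sketch below is only an illustration of that idea; the file path and the example terms are made up, and none of this is an option you can change for the built-in keyboard dictation in Settings.

import Speech

// Ask for speech-recognition permission (required once per app).
SFSpeechRecognizer.requestAuthorization { status in
    guard status == .authorized else { return }

    // Recognizer for US English; the initializer fails for unsupported locales.
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")) else { return }

    // Hypothetical recording to transcribe (placeholder path).
    let request = SFSpeechURLRecognitionRequest(url: URL(fileURLWithPath: "/path/to/recording.m4a"))

    // Bias recognition toward terms unlikely to be in the general language model
    // (placeholder examples).
    request.contextualStrings = ["iPhone SE 2", "mini-LED", "teardown"]

    // Prefer on-device processing when the device supports it.
    if recognizer.supportsOnDeviceRecognition {
        request.requiresOnDeviceRecognition = true
    }

    recognizer.recognitionTask(with: request) { result, error in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)   // final transcript
        } else if let error = error {
            print("Recognition failed: \(error.localizedDescription)")
        }
    }
}

Keyboard dictation doesn't expose these knobs, but the example shows why a word outside the model's vocabulary tends to get replaced with something that merely sounds similar, as in point 1.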


It's worth noting that advancements in machine learning and artificial intelligence have enabled voice recognition systems to become more contextually aware. These systems utilize deep learning techniques and large datasets to improve accuracy by considering broader linguistic and contextual information.


However, despite advancements, speech recognition systems can still occasionally make errors, and the accuracy can vary depending on the specific implementation, device, and software version. Regular iOS updates may improve performance and address common issues.

